Discussion about this post

Venkatesh Rao:

Further comment on the "LLM traps" this post has had to weather. Imagine you invented a time machine, and wanted to write a paper about it. Do you think you'd need to use anti-LLM-summarization traps in order to get people to personally read every last detail eagerly? The tactic is ONLY meaningful if you know your point is fundamentally weak, and you need an easy way to dismiss criticisms as "aha, you didn't actually read it! Gotcha!"

If you think you have done something truly significant, and it isn't popping in the headline and abstract, you've either buried the lede because you don't recognize it yourself, OR you're mistaken. In this case, it's the latter situation. The brain-scan experiments don't prove what the authors think they prove, and the headline narrative is basically false.

Harsh Gupta:

This post I felt like reading, and I didn't notice or mind that it was AI-generated.

Somehow I didn't feel the same about the earlier AI-generated posts in this series.

Maybe it is because the models are getting better, or because you are getting better at prompting; probably both.

