Discussion about this post

Dylan Black:

Seems harsh. I love using the LLM as a rubber duck, *especially* for brainstorming, and this rubber duck talks back! Usefully, in the main. Sure, it's often subtly wrong, but then again, so am I. Dwarkesh's sin seems to be having insufficient domain knowledge to properly sanity-check the LLM's reasoning?

The bullshit velocity has increased, and LLM BS *sounds* dangerously plausible, but in my experience at work, the actually-good-idea velocity has also increased.

Nevertheless, a valuable cautionary tale.

Lydia Nottingham:

I saw this post, really liked it, then immediately started using ChatGPT to interpret a friend's message.

Vibe-thinking is suboptimal, but it's going to happen. What are the mitigation tactics, or ways to make it less destructive?

I also need to think more about the RL sample-efficiency thing. I feel like Daniel hasn't gotten to the end of this.

