You are viewing a single comment's thread from:

RE: OpenAI's Latest LLM - First Model That Reasons, But Also Deceives and Attempts to Self-Preserve

in Proof of Brain · 3 days ago

I guess it's complex, to say the least. But the name "Chain-of-Thought" sounds cool. Like following the thread of a thought from its inception to wherever it ends, before it's uttered, compels one to act, or neither of the two...


It's actually pretty cool. From the screenshots I've seen, the user actually sees o1's CoT before it outputs the answer: where it's "thinking" and what it's thinking about. Sort of, anyway, since it's a generated summary of the reasoning rather than the raw trace, but still quite powerful for the first iteration of such a feature.
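For anyone curious what the "think first, answer after" pattern looks like in practice, here's a minimal sketch of classic chain-of-thought prompting with the OpenAI Python SDK. It assumes an `OPENAI_API_KEY` in the environment, and the model name is just a placeholder. o1 does its reasoning internally and only surfaces a summary, so this is a rough imitation of the idea, not how o1 itself works.

```python
# Minimal chain-of-thought prompting sketch (assumes the OpenAI Python SDK
# and OPENAI_API_KEY set in the environment; model name is a placeholder).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "A bat and a ball cost $1.10 together. The bat costs $1.00 more than the ball. "
    "How much does the ball cost?\n\n"
    "First write your reasoning step by step under 'Thinking:', "
    "then give the final result under 'Answer:'."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works
    messages=[{"role": "user", "content": prompt}],
)

# The reply contains a visible chain of thought followed by the answer,
# loosely like what the o1 interface shows (there it's a summary of hidden
# reasoning tokens rather than the raw trace).
print(response.choices[0].message.content)
```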