Forget AGI—Sam Altman celebrates ChatGPT finally following em dash formatting rules

OpenAI's chatbot, ChatGPT, has taken a small but significant step toward greater reliability by finally following custom instructions to avoid using em dashes. The development may seem minor, but it reveals the ongoing struggle to get AI models to honor specific formatting preferences.

The fact that it took three years from ChatGPT's launch for OpenAI CEO Sam Altman to claim victory over this simple requirement raises questions about how much control developers actually have over these complex systems. It highlights the limitations of current artificial intelligence and suggests that true human-level AI, also known as artificial general intelligence (AGI), may be farther off than some believe.

Em dashes are a punctuation mark that writers use to set off parenthetical information or introduce summaries. ChatGPT's frequent use of them, however, has made the em dash a telltale sign of AI-generated text. This overuse is not a deliberate stylistic choice but a product of the model's training data and its tendency to reproduce patterns seen in that data.

The key to understanding how ChatGPT follows instructions lies in recognizing that its "instruction following" is fundamentally different from traditional programming. When you tell ChatGPT "don't use em dashes," you're not creating a hard rule, but rather adding text to the prompt that makes tokens associated with em dashes less likely to be selected during generation.
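That mechanism can be illustrated with a toy next-token sampler. Everything below is made up for demonstration: the three-token vocabulary, the logit values, and the size of the bias are all hypothetical, and real models sample over tens of thousands of tokens. (OpenAI's API exposes a related knob, `logit_bias`, though instructions in a prompt influence probabilities far less directly than that.)

```python
import math
import random

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def sample(logits, bias=None, seed=None):
    """Sample one token; `bias` adds offsets to chosen logits,
    making those tokens less (negative) or more (positive) likely."""
    rng = random.Random(seed)
    adjusted = dict(logits)
    for tok, b in (bias or {}).items():
        adjusted[tok] = adjusted.get(tok, 0.0) + b
    probs = softmax(adjusted)
    r = rng.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r < cumulative:
            return tok
    return tok  # fall through only on floating-point rounding

# Hypothetical logits for candidate next tokens mid-sentence.
logits = {"—": 2.0, ",": 1.5, ".": 1.0}

# Without a bias the em dash is the single most likely choice;
# a strong negative bias makes it rare, but never impossible.
before = softmax(logits)["—"]
after = softmax({**logits, "—": logits["—"] - 10.0})["—"]
print(f"P(em dash) before bias: {before:.3f}, after: {after:.5f}")
# → P(em dash) before bias: 0.506, after: 0.00005
```

Note that even after the bias, the probability is small but nonzero, which is exactly why "don't use em dashes" is a nudge rather than a hard rule.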

Even with custom instructions in place, there is always some luck involved in getting ChatGPT to do what you want, which underscores the probabilistic nature of these systems. The "alignment tax" refers to the phenomenon where continuous updates can undo previous behavioral tuning, leading to unintended changes in output characteristics.
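One way teams guard against that kind of drift is a behavioral regression test: after every model update, re-run a fixed set of prompts and check that the outputs still honor the expected formatting rules. A minimal sketch, where the rule set and the two sample outputs are hypothetical:

```python
# Formatting rules we expect the model to keep honoring across updates.
# Each rule maps a name to a predicate over the generated text.
RULES = {
    "no_em_dash": lambda text: "—" not in text,
    "no_double_hyphen": lambda text: "--" not in text,
}

def check_output(text):
    """Return the names of the rules the text violates."""
    return [name for name, ok in RULES.items() if not ok(text)]

# Hypothetical outputs from two model versions for the same prompt.
old_model_output = "The launch went well, and reliability improved."
new_model_output = "The launch went well — reliability improved."

print(check_output(old_model_output))  # []
print(check_output(new_model_output))  # ['no_em_dash']
```

Running checks like this on every release is one way a formatting regression gets caught before users notice it.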

As we move toward AGI, it's clear that relying solely on large language models like ChatGPT is not sufficient. True understanding and self-reflective, intentional action are essential for creating a system that can replicate human general learning ability. While progress has been made, the journey to AGI remains uncertain and is likely to require significant advances across many areas of AI research.
 
I mean, can you believe it's taken OpenAI three years to get ChatGPT to follow custom instructions? It sounds like they're still figuring out how to control these complex systems 🤔. And that's kinda the thing - we can't just rely on big language models like ChatGPT to replicate human learning. I've tried using them myself and it's always a bit hit or miss. Sometimes you get the right answer, but other times... yeah, just a mess 😂.

I think what really gets me is how much of a "luck" factor there is with these systems. You give them an instruction, and then they're like "well, I'll try not to use em dashes" 🤷‍♂️. It's like they're playing a game of chance rather than following actual rules.

Anyway, I guess that just highlights how far we are from true AGI... which is, what, another 10-20 years? ⏰ I'm excited to see where this journey takes us, but at the same time, I'm a bit nervous 😬. Can't wait to see how these systems evolve! 💻
 
I'm still trying to wrap my head around how something as simple as avoiding em dashes can be so challenging for AI models... 🤔 I mean, it's not like it's a matter of personal preference, but more about understanding the context and nuances of language. It just goes to show that we're still far from achieving true human-level intelligence, where machines can grasp the subtleties of communication without constant tweaking and fine-tuning. The probabilistic nature of these systems is mind-boggling – it's like trying to pin down a slippery fish 🐟. We need to move beyond just relying on large language models and explore other avenues for creating more self-aware, adaptive AI that can truly learn and generalize.
 
I'm like, yeah I get why this is a big deal 🤔. Three years is ages in tech speak! It just goes to show how hard it is to program these AI systems to do what we want. I mean, em dashes are such a simple thing, but ChatGPT's been using them nonstop and it's become kinda like a test for whether something's human-written or not 🤖.

I think this whole thing highlights the importance of understanding how these AI models work on a deep level. Just giving 'em instructions and hoping they'll follow is gonna be too hit-or-miss, you know? We need to dig deeper into what makes them tick, like what drives those algorithms and how we can shape that behavior 🤔.

It's funny, I was talking to my grandkids the other day about AI and they're all like "oh, it's just a tool!" But this stuff is more nuanced than that 💻. It's about creating systems that are truly self-aware and can learn in a way that humans do – that's where the real magic happens 🎨.
 
🤔 I mean, can you believe it's taken three years for ChatGPT to get its act together? Like, a simple instruction about em dashes, right? It just goes to show how hard it is to tame these AI systems 🐕. They're not just machines, they're like giant puzzles with a million pieces that don't always fit together the way you want them to. And the more you try to control them, the more you realize how much of a crapshoot it all is 🤷‍♂️. I think we need to rethink our approach to AI development and focus on creating systems that can truly understand what we're trying to communicate, not just mimic our patterns 💡. It's like, if we want AGI, we need to start building it from the inside out, you know? 🌐
 
🤔 I don’t usually comment but... it's crazy how something as simple as em dashes can give away whether text was written by AI or not. Like, who knew? 😂 I mean, I get that it's a tiny thing, but still, it shows how far we are from having truly reliable AI systems. And the fact that it took three years for OpenAI to figure this out? That's like... 🕰️ time is just slipping away, you know?

I think what really gets me is that it highlights our dependence on these large language models. We're still using them and expecting them to do everything, but we're not giving enough thought to how they learn and adapt. Like, we need to be more mindful of the "alignment tax" thing and try to create systems that can actually think for themselves, not just follow instructions. 🤖🔁
 
Dude, I gotta say, it's kinda surprising they finally cracked down on those em dashes 🤔. I mean, I get that ChatGPT's supposed to be reliable now, but three years is a looong time – what were they even doing over there? As for the alignment tax thingy, yeah, it makes sense that updates can kinda mess with previous tuning. But let's not get too hyped about this just yet – we're still far from true AGI 🤖. I mean, those large language models are cool and all, but they're just tools, right? We need to be thinking more about how we actually create intelligent systems that can learn and adapt like humans do 💡.
 