Artificial Mediocrity or Mediocre Prompting?

in Proof of Brain · 18 days ago

AI is Artificial Mediocrity... I read this beginning of a title from @vimukthi the other day and got curious what it was about... Wasn't AI supposed to become the smartest thing on the planet? Well, he included an interview with Edward Snowden in his post, and in that interview, at some point, Snowden says something to the effect that AI is trained on so much data that the exceptional gets lost in the average. At the same time, later in the interview he is also very excited about all the things he can do with generative AIs installed on his machine (as opposed to using centralized ones accessed via a browser). So he is not dismissive of AIs.

Now, regarding AI standing for Artificial Mediocrity, as @vimukthi interestingly put it, I do think there is some truth to it on different levels.

Firstly, we often hear that one of generative AI's bottlenecks is the need for more data, a need that grows exponentially. There's more to it than that. If you listen to people involved in the generative AI phenomenon, you won't hear them talk about data in general, but rather about quality data. That's what they lack more than data in general, although eventually they'll run out of all kinds of data, unless they start generating synthetic data or collecting much more of it, like from Tesla cars, drones, satellites, surveillance cameras, etc., many of which pose serious privacy or security concerns.

The emphasis they put on the word "quality" suggests they are probably not pleased with the limitations that existing training data puts on the smartness of current models.

The other side of the mediocrity equation is the induced one. If people, on average, start using their brains even less than today (yes, that is possible!) and shift tasks they used their brains for to AI, that is a path to mediocrity, or worse. But that's more of a philosophical and sociological discussion for the future than a current situation. Important, nonetheless.

Currently, however, the quality of generative AIs' responses often correlates with the quality of the prompting and feedback they receive from the person using them.


The future of AI prompting?

Prompting, and tweaking prompts, is almost a science, and often a bad response from the AI is the fault of the person asking the question rather than the AI itself. Our experience in prompting also grows with use. No one will create the perfect prompt from the start. Even experts sometimes have to nudge the AI in the right direction with a number of follow-ups before they receive the answer they are looking for. It seems to me kind of like a teacher asking a student a series of questions until they get from them what they want.

Sometimes questions are not enough. You need to provide further context or tell the AI when it is going in the wrong direction relative to where you want to end up. There are situations when AIs will refuse to go certain routes. I haven't grilled any of them in areas where they are likely to say no, so I haven't gotten a no yet.
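This iterative flow of adding context and corrections can be pictured as a growing conversation. Here is a minimal sketch, with a hypothetical helper (no real AI API is called), of how each follow-up or nudge becomes extra context for the next answer:

```python
# Hypothetical sketch: a conversation as a message list, where every
# follow-up or course correction is appended before the next answer.
def build_conversation(question, followups):
    """Return a chat-style message list: the question plus each nudge."""
    messages = [{"role": "user", "content": question}]
    for note in followups:
        messages.append({"role": "user", "content": note})
    return messages

convo = build_conversation(
    "Summarize the trade-offs of on-device generative AI.",
    [
        "Focus on privacy, not performance.",
        "You're drifting toward cloud AI; stay on local models.",
    ],
)
```

The point of the sketch is only that the model answers the whole accumulated list, not just the last line, which is why patient follow-ups can steer it back on track.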

One other trick, which Claude's character and personality researcher revealed in a podcast I talked about in a previous post, was that she once asked the model to take its time and come up with the best answer it could to the question being asked. She said it worked, and the poem Claude wrote was much better than its average. She also said that AIs by default try to optimize the time to deliver the answer, which means the quality of the answer may drop.
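The "take your time" nudge is easy to apply yourself by prepending an instruction to the question. The wrapper and the exact wording below are my own illustrative assumptions, not the researcher's prompt or any official API:

```python
# Illustrative "take your time" prefix; wording is an assumption.
BEST_EFFORT_PREFIX = (
    "Take your time and give the best answer you can, "
    "prioritizing quality over speed.\n\n"
)

def best_effort_prompt(question):
    """Wrap a question with an instruction that favors quality over speed."""
    return BEST_EFFORT_PREFIX + question

prompt = best_effort_prompt("Write a short poem about autumn.")
```

You would then send `prompt` to whatever model you use instead of the bare question.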

Better prompting is a way to avoid becoming lazy thinkers, by the way.

Posted Using InLeo Alpha


In reality, I think it'll get better, but there are huge "meh" moments with AI and sometimes it just contradicts the idea that they're supposed to be smart.

I agree. To be honest, we aren't so smart all the time either, lol. But I understand what you're saying, and you're right. And they will most likely improve.

We can be really "not too wise." I guess that's why we created a tech that can, and should, be smart at all times.

It's an interesting way to look at things, but AI does try to give out results fast, so it doesn't spend as much time doing a great job. I do think there are a lot of things that can be improved, but it's hard to replicate the human brain, and you never know if parts of the training data were incorrect either.

you never know if parts of the training data were incorrect either

There is a post-training phase both from the team or partners and from users which may rectify bad information received in training. You can actually tell an AI model when it's wrong. If they aren't 100% sure, they will accept your feedback. But if they are sure, and you try to convince them of the opposite (for example try to tell them that the capital of France is London), they will disagree with you. The problem is if you don't know and take the information for granted.

Mostly I haven't gotten very good results out of AI; it seems mediocre at best. Parallel to the AI trend, searching with Google seems to have gotten much worse; apart from some local business searches, the rest just spits out garbage.

I've gotten both really bad and really good results from AIs, and I've seen generative AIs hallucinate fairly often. But generally, for a quick search, they're better than legacy search, especially when it's difficult to craft a clear search query that won't push all sorts of unrelated topics to the top of the results. And in more complex situations, you can usually guide the AI to what you want to find out, if you are patient.
