4 Comments
Matt Pursley

Great insights! I think you are right.

I was working on a project scouring patristic uses of hope in relation to salvation. So many of the primary sources it referenced were simply wrong. It’s helpful for organizing a bibliography, but you still have to go through them and systematically make sure the text and context actually bear out the suggested reference. Some citations were simply nonexistent, lol. My guess is it pulled from a typo in a published paper.

Chris Nye

That is a great point. The number of out-of-context quotes—especially from ancient sources—is bad. But I would put that in the "it'll probably be fixed in two years" category. What do you think?

Matt Pursley

That would be interesting. A few weeks ago, my wife mentioned reading an article that claimed AI is actually getting worse, not better. It said that without a proper stopgap, AI just keeps compounding information without any filter. But I agree with you — they’ll likely figure out a way to address that.

I'm saying that while also actively using AI to edit the typos out of my response ;)

Phil

As more AI-generated content appears on the internet, it will end up training future AIs, which could lead to degradation of performance. I think that pre-AI content (so prior to about 2022) will end up being at a premium for training. Still, that will make it hard for AIs to "know" about anything after that cutoff point.
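The degradation worry can be sketched with a toy simulation (my own illustration, not from any particular paper): each "generation" fits a Gaussian to a small sample drawn from the previous generation's fit. With finite samples, the fitted spread tends to drift downward over many generations, so later generations lose the diversity of the original data.

```python
import random
import statistics

# Toy sketch of recursive-training degradation ("model collapse"):
# generation 0 is the "real" data; each later generation is trained
# only on a small synthetic sample from the previous generation.
random.seed(42)

mean, stdev = 0.0, 1.0
history = [stdev]
for generation in range(100):
    samples = [random.gauss(mean, stdev) for _ in range(10)]
    mean = statistics.mean(samples)    # refit on synthetic data
    stdev = statistics.stdev(samples)  # fitted spread tends to shrink
    history.append(stdev)

print(f"stdev after 100 generations: {history[-1]:.4f}")
```

The run is stochastic, so the spread doesn't shrink monotonically, but the long-run tendency is toward collapse because lost variation is never reintroduced.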

There's another idea posited by Yann LeCun, one of the godfathers of AI and chief AI scientist at Meta: he says that current LLM architectures are always going to be prone to hallucination, and that we need some radically different architectures if we're going to overcome that.

As in any engineering field, people tend to get fixated on the current state-of-the-art architecture and how to improve it (because it's been pretty successful), so there tends to be less focus on questioning the status quo and investigating alternatives that might be better. We call this "getting stuck in a local minimum": imagine a terrain with hills and valleys, and you want to find the lowest point. You roll a ball, it falls into a valley and gets stuck there, but it turns out that's not the lowest point; that's over in the next valley. Technology tends to work the same way. We could be stuck in a local minimum with the current crop of LLMs, and it might take quite a while to get out of this valley.
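The rolling-ball picture is literally how gradient descent behaves. A minimal sketch (my example function, not anything specific to LLMs): the function below has two valleys, and descent starting from x = 0.5 settles into the shallower right-hand valley, never reaching the deeper one near x = -1.

```python
# Two-valley function: local minimum near x = +1, global minimum near x = -1
# (the +0.2*x term tilts the terrain so the left valley is deeper).
def f(x):
    return (x**2 - 1) ** 2 + 0.2 * x

def grad(x):
    return 4 * x * (x**2 - 1) + 0.2

x = 0.5                    # where the "ball" starts
for _ in range(200):
    x -= 0.05 * grad(x)    # plain gradient descent: roll downhill

print(f"converged to x = {x:.3f}, f(x) = {f(x):.3f}")
# The ball stops in the right-hand valley even though the
# left-hand valley (near x = -1) has a lower f value.
```

Escaping such a valley requires something other than following the local slope, which is the analogy: incremental improvement of the current architecture won't find a fundamentally better one.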
