Amidst all the storm and drama, I’ve been enjoying learning more about just how the current crop of AI applications (LLMs, or Large Language Models) operate. I don’t want to imply that I understand them at a deep level — my programming career peaked with some piecework programming in Visual Basic that I did in grad school. (Actually, there was also that extremely elegant bubble sort program I wrote in middle school. But I digress …) Still, the reading has at least given me some sense of the roots of the limitations that are now coming to the fore. Specifically — but not only — the way applications like ChatGPT frequently fabricate false claims clothed in the trappings of facticity. It’s as if the model is insisting: I’m not just claiming this new protein structure exists. I’m citing the specific journal article, with the page numbers and date and authors, etc. It’s got to be true!
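To make that limitation a little more concrete, here is a deliberately tiny sketch in Python of the basic mechanism as I understand it: the model produces each next token by sampling from a probability distribution over what usually follows, and nothing in that loop ever consults a source of truth. The `toy_model` table, its probabilities, and the "citation" it emits are all invented for illustration; no real system works from a lookup table like this, but the sampling step is the same idea at a vastly smaller scale.

```python
import random

# A toy "language model": for each bit of context, a probability distribution
# over possible next tokens. These entries and numbers are invented for
# illustration; a real LLM learns billions of such statistical associations
# from its training text.
toy_model = {
    "The structure was reported in": [(" Nature", 0.5), (" Science", 0.3), (" Cell", 0.2)],
    " Nature":  [(", vol.", 1.0)],
    " Science": [(", vol.", 1.0)],
    " Cell":    [(", vol.", 1.0)],
    ", vol.":   [(" 123, pp. 45-67 (2021).", 1.0)],  # plausible-looking, wholly made-up reference
}

def sample_next(context):
    """Pick the next token by weighted chance, the way a (greatly simplified)
    language model does. Note that nothing here checks whether the result is true."""
    choices = toy_model.get(context)
    if not choices:
        return None
    tokens, weights = zip(*choices)
    return random.choices(tokens, weights=weights, k=1)[0]

# Build a "citation" one token at a time, each step conditioned only on the last token.
output = ["The structure was reported in"]
while (token := sample_next(output[-1])) is not None:
    output.append(token)

print("".join(output))
# e.g. "The structure was reported in Nature, vol. 123, pp. 45-67 (2021)."
# It reads like a real reference, but it was assembled purely from statistical
# plausibility; the "model" never consulted any journal or database.
```

The point of the toy is only that the generation loop optimizes for what usually follows, not for what is true. Scale that up by billions of parameters and you get prose that wears the trappings of facticity without any facts behind it.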