Can you use AI for nutrition advice?
I have mixed feelings about AI.
Sure, it’s fun to participate in trends where you get to turn your dog into a human, or yourself into a collectible figurine.
I maaay have also used ChatGPT to ask whether I’m being unreasonable for being annoyed by someone’s actions. I can be assured that Chatty will keep a secret, and it doesn’t make me feel like the bad guy to have a little vent about someone I genuinely care about, when I know the robot I’m talking to has no relationship with the other person.
AI is nice to me, and gives fleshed-out answers that make sense, often just as helpful as what a friend would say.
And indeed, this is what makes it quite problematic.
There’s already a big problem with the internet: people fall into, and then dwell in, echo chambers full of others who share their opinions.
Because the creators of social media apps build algorithms designed to keep you engaged for as long as possible, the bursts of dopamine you get from being validated in your particular beliefs are essentially breadcrumbs that Mark Zuckerberg and co feed you as a reward for helping Mr Lizard-Man harvest more of your precious, delicious personal data.
I’ll admit I’ve fallen into multiple echo chambers in my lifetime, both on social media and in real life. I’m probably still a participant in a few; I think most of us who use the internet are.
Whenever I’ve held a particularly strong belief or felt passionate about something, it’s usually because that opinion has been reinforced by a consistent stream of voices around me that support it.
When you’re in an echo chamber, every day is like a rehearsal for the next time you get into an argument with somebody who disagrees with you.
Echo chambers are prevalent on the fitness and wellness side of the internet. They arise when somebody starts with an opinion and then seeks evidence, whether peer-reviewed studies or anecdotes, to build an entire case around it. But here’s the thing: you can find studies and stories to support just about any claim.
This is how most diet books are written. Somebody tries something, then goes looking for research that explains why it worked. Then they look for stories of other people who’ve also tried that approach, and tie it all together to say, ‘See! We’ve been lied to all along, THIS is the true key to well-being!’
Generally, the more ‘out there’ or controversial a claim, the better the book will sell. So yes, this is me telling you that any book that tells you not to eat vegetables should be thrown in the bin. These are written by grown adults with the taste preferences of four-year-olds, trying to justify the fact that they can’t take a shit and their cholesterol is through the roof.
I get very wound up about bad diet claims. Anyway, back to ChatGPT.
Unfortunately, ChatGPT is not the scientifically literate bestie we can summon for all our fact-checking needs.
You see, in the same way that charlatans on the internet can stitch together random evidence to construct an argument more aggressive than any Karen-versus-retail-manager stand-off you’ve ever seen…
ChatGPT will happily do exactly that.
ChatGPT will find the specific studies and articles that back your angle, highlight their results and use them to build a case…
But it can’t (just yet, anyway) provide the nuance required to help you work out whether any of that actually applies to your situation.
There have been times I’ve asked ChatGPT more scientific questions about nutrition out of curiosity, or even just asked it to write a basic workout plan for me, and I’ve noticed inaccuracies and flaws in its answers. Not because they’re obvious, but because I’m asking about an area I’ve studied and worked in, which means I’m able to spot them.
I fear that people are putting their blind faith in AI without questioning whether it is just pulling out random studies that will support any side of any argument.
ChatGPT doesn’t have an opinion; it’s not doing the keto diet itself, but it can tell you why you should do it (or why you shouldn’t), depending on what you ask. We’re about to enter an era where people just do whatever they think is correct and justify it with ‘ChatGPT told me to’.
At the end of the day, it’s fine, and admittedly exciting, to recognise the potential AI offers us. I know how incredible AI is, and I do think we can take advantage of it in ways that help us. But I also think we should proceed with caution when using it to replace conversations with experts, professionals and other humans. It’s engaging with real humans that helps us grow and expand our knowledge, not the overconsumption of regurgitated information AI has cherry-picked for us.
(In short, plz robots, don’t steal my job from me.)
I’d love to hear your takes.