Once I get into the swing of outsourcing my life to artificial intelligence, it becomes strangely addictive. “Find me a venue for my 30th birthday,” I type into OpenAI’s mind-bogglingly smart chatbot ChatGPT, adding “without having to spend more than £300” because — as I have to keep reminding myself — this is a robot that is yet to appreciate my level of financial inadequacy.
The bot whirs into action, offering me a list of pubs and bars to fit my budget. I go for a second attempt and ask it to write a message to my parents about the issue of them maybe, just possibly (if they’d like to!) footing some of the bill on the night. While the bot types out a tactically worded WhatsApp message, I ask my AI stylist to find me an outfit — fun but not too try-hard, parent-appropriate, oh and ideally something I can wear for my new work headshot next week.
Then I remember the whole point of this exercise is to outsource: why spend hours posing for my headshot when I can just ask an app like Lensa AI to generate the picture for me? Seventy robot-generated headshots later, I opt for a cross between me looking like a TIME Mag cover star and a strangely chic Navy SEAL, delighted to have avoided the hours of attempts it would have taken me to achieve this level of eyes-open glamour. The £7.99 it cost me to download the app was certainly more cost-efficient than a new wardrobe, even if I am slightly offended that robots have chosen to remove the mole below my right cheek.
It’s Thursday afternoon and I am six hours into what is quickly becoming one of the most efficient weeks of my life. So far, AI has helped me cook up a back-of-the-fridge Mediterranean salad (no mean feat, when the only veg I had left was a carrot and five semi-shrivelled mushrooms), plot a three-day cycling route for my coast-to-coast ride this summer, and think up some inventive Mr & Mrs questions for my friend Kate’s hen do (sorry Kate). So far, so impressive — so I get creative.
“Write me the introduction for an article about outsourcing my life to AI,” I ask the bot, receiving a reassuringly bland, 100-word opener about “the potential benefits and drawbacks of this technology-driven approach to living”. The bots won’t be stealing my job anytime soon, then. Phew. And in the meantime, I’ve gained a highly intelligent intern who can do all my boring tasks without asking for a lunch break.
Like any intern, however, it might need some fact-checking. According to the Twitter bio ChatGPT has written me, I’ve been working for The Telegraph since 2016 (not true), have ghost-written several books for high-profile clients (definitely not true) and have built an impressive career in the media industry for my witty, engaging style. At least the robots have good taste.
If you’ve acquainted yourself with software like ChatGPT by now, you’ll probably recognise some of the scenarios I’m talking about. The groundbreaking chatbot from OpenAI (the same technology now powers Microsoft’s Bing) has been dominating headlines and dinner table conversations since its launch in November thanks to its ability to respond instantly to complex questions in a freakily humanlike manner (GPT stands for Generative Pre-trained Transformer, if you’re confused by the uncatchy, medical-sounding name).
For many of us, it’s still at the novelty stage. You can give it silly tasks, like “write a rap in the style of Liz Truss” or “produce a complete script for a Succession scene in which Logan Roy is abducted by aliens”. But not everyone sees it purely for its fun side. For a growing number of internet users, ChatGPT has quickly become an indispensable tool for the important, time-consuming tasks in life, like putting together a presentation for their job interview or producing a reading list for their dissertation.
Almost half of Cambridge students have admitted to using ChatGPT for university work (institutions including — ironically — the International Conference on Machine Learning have had to ban authors from using it) and Chancellor Jeremy Hunt reportedly used it to write a recent speech on innovation. Developers OpenAI say they wanted to create a machine that could do anything the human brain could do, and so far it’s not doing too badly: the latest update can write entire novels, generate code and even ace the bar and medical exams within seconds.
Techsperts are already calling ChatGPT’s release a watershed moment in the history of technology, even of the human race, and rivals Google and Amazon have now joined the race to own the large language model space (Google released its version, Bard, in March). “This is every bit as important as the PC, as the internet,” Bill Gates told Forbes of the ChatGPT boom back in February. In other words: Google (that is, analogue Google) is officially dumped.
Or is it? As a journalist covering everything from tech to health, politics and education, I am keen to test this theory for myself. Is the tech really as clever as people say it is? Could it actually do my work for me? What are the flaws and biases to be aware of? Not since Covid has a single phenomenon touched every realm of my journalistic remit: AI replacing personal trainers in the gym; ethical dilemmas over whether schools should allow bots during exam season; even questions about whether the technology signals the end of the entire human race. So in the interests of first-person journalism, I put on my investigative hat and place my life into AI’s trusty (or not-so-trusty) hands.
“A third of young singles plan to use AI to boost their dating profiles,” is among the first emails to land in my inbox on day one. After almost a decade on and off the digital dating frontline, my love life feels like a fitting, if slightly dangerous, place to start. The propositions are everywhere: Dara, an AI matchmaker from dating app Elate, promises to use the same technology as ChatGPT to suggest first date ideas. New AI ‘wingman’ Rizz promises to help users with ‘killer’ chat-up lines. And according to a new study by security firm McAfee, 70 per cent of adults recently fell for a ChatGPT-generated love letter over a real-life poem. Real-life romance, it seems, is officially dead.
But given the hours my friends and I have wasted on Hinge over the years, I’m still intrigued. Isn’t using robot-generated chat-up lines exactly the opposite of what finding a partner is supposed to be about? And should we really be trusting robots with our preferences for a partner’s height or what they like in bed?
A quick glance at my phone reminds me I’ve been sharing some of my most personal data with AI already. My Garmin smartwatch uses AI to take my heart rate and coach me through my 10ks. Transcription app Otter uses it to listen to all my interviews for work. And dating app Hinge has long been using my romantic preferences to draw me a list of men from south-west London who seem to feel strangely passionate about pineapple on pizza.
So can this latest generation of AI chatbots really help? After several attempts at using Dara (don’t outsource your flirting to a bot unless you want Dave from Dalston to think your ideal first date is a hot air balloon ride), I give up on what is essentially a conversation between two robots and decide to ask ChatGPT what to say to an old flame who’s been sliding into my DMs all week. It responds surprisingly thoughtfully, producing admittedly less lol-worthy but definitely more sensible advice than a straw poll of my closest friends.
I find myself wondering whether I should always be turning to robots when I need a listening ear, as many already do with the rise of AI therapists like Woebot and Replika. Then again: wouldn’t this rid us of one of the bread-and-butter ingredients of human relationships: sharing? I’ve always been flattered when a friend confides in me and I know for a fact that my married friends quite enjoy the thrill of giving advice on the clumsy online dating world they’re missing out on.
So which elements of my life should I be outsourcing to this supposedly humanlike new technology? Given that half the magic of chatbots like ChatGPT is to do the hard thinking for us, I ask the bot itself. It offers me ten examples, from obvious use-cases like holiday planning, scheduling workouts and writing, um, articles to more curious examples like sorting my emails (can it access my inbox?) and analysing my spending habits (is there a Monzo add-on?).
Reassuringly, more thorough questioning of ChatGPT tells me the bot can’t access my emails or bank account — nor should I be inputting sensitive personal data. What it’s more suited for is scouring vast swathes of the internet so you don’t have to, whether it’s trawling Google for restaurant recommendations or putting together succinct explainers on how Penny Mordaunt really did hold that sword for over an hour (short answer: the bot was trained on data that stops in 2021 — so it thinks I’m referring to Mordaunt’s performance aboard HMS Illustrious in 2014).
I quickly learn that the devil of ChatGPT is not in the detail, but in its ability to cut through the digital noise and find the most useful information within seconds. Research, brainstorming and translation are among the most popular use-cases, but what you use it for is up to you. Dyslexic friends tell me it’s been game-changing for writing scripts and responding to LinkedIn messages. Social media is awash with entrepreneurs referring to ChatGPT as their business partner, helping them to build websites and run social media campaigns at a fraction of the cost of hiring a human developer. Others say they use it as a BFF or practice partner, whether that’s preparing for a job interview or rehearsing for a sensitive conversation with one’s kids about toxic misogynist influencer Andrew Tate.
Recipe-planning comes up consistently among friends — particularly men (is it a male trait to want to streamline the cooking process?). “Results on Google give you a whole life story about how the author had come across their recipe from their great grandmother... ChatGPT gave me the relevant info straight away,” says one pal, Jimmy, who used ChatGPT as his co-host for a dinner party over Easter, asking it creative questions like “give me a cocktail recipe based on hot cross buns”.
Michelin-starred chef Chris Galvin recently admitted he was stunned by ChatGPT’s recipe for quails’ eggs with asparagus, and Welsh long-distance runner Will Renwick tells me he’s been using ChatGPT to generate meal plans because it allows him to be ultra-specific. Ask the bot to give you a recipe for an 800-calorie dinner that contains 50g of protein and four servings of veg and it’ll do just that.
The key, I quickly learn, is to treat the bot like any significant relationship in your life and be a good communicator. The more you put in, the more you’ll get out. As the week goes on, I get better at remembering to spell out exactly what I want, whether it’s “explain the Sudan conflict to me as though I’m a 10-year-old” or “read this 5,000-word article for me and list the 10 key points I need to know”, or even: “I want to post three photos from a friend’s 30th birthday party. We’re laughing in the first one. Write me a caption.”
I turn to Google’s chatbot, Bard, for these ones (unlike ChatGPT, it’s connected to the internet in real-time, so its knowledge doesn’t end in 2021), and it diligently offers several options for my Instagram caption, from “30 is the new 20, right?” to “Happy 30th birthday, best friend!”.
Personality, it transpires, is where the bots are not so hot. Sure, I can ask them to draft that pay rise email to my boss, but what AI will surely never be able to master are the quirky human interactions and nuances that make up real life: the jokey GIF your PT sends you when you’re running three minutes late; the wedding speech that only the BFF who’s known you since school can pull off; the specific neckline on a dress that only my eyes can pick up after years of shameful ASOS scrolling. That’s no offence to the technology — Stitch Fix, which uses a mix of AI algorithms and human stylists, did technically follow my instructions and sent me a V-neck wrap dress — but there are little quirks I understand about my own body shape and how things fit that would be almost impossible to describe to a robot, however well it has been trained.
A ChatGPT fanatic on Twitter suggests I try inputting some of my recent articles into the chatbot to create a “Katie command” and the results do start to sound a little more like me. But despite OpenAI’s insistence that its bot can produce original jokes and hilarious poetry, I still find there’s little in the way of actual, subtle humour in the supposedly witty articles it produces under my name — hardly surprising, given it is essentially just a souped-up version of autocomplete (that’s right: AI doesn’t actually understand what it is saying).
My quest identifies several other perils and pitfalls. Dom — a friend studying AI and ethics at Cambridge and suddenly the most popular person at any dinner party — tells me an average exchange with ChatGPT consumes roughly a large bottle of fresh water in data-centre cooling, the equivalent of dumping it on the ground, which feels somewhat bleak and counterproductive when we’re talking about technology that could one day be used to save the planet. Another friend, Nat, recounts a strange moral dilemma she faced after using ChatGPT to write her a sympathy card. “It got me over the initial mental block... but I then felt really guilty when I posted it as it didn’t feel as sincere as it should have,” she says.
Since answers can only be generated from the data it is fed, there is also a noticeably American bias and tone to ChatGPT’s answers and, more alarmingly, a bias that clearly favours white men. TikTok is awash with warnings of ChatGPT “thinspo” advice featuring 400-calorie-a-day diets and laxative abuse; ChatGPT’s answers on counterterrorism have been found to propose torturing Iranians and surveilling mosques; and Lensa AI, the AI image-generating app I use for my work headshots, has been heavily criticised for creating often-sexualised images of women.
By the end of day five, I find myself wondering whether outsourcing to AI has made me more efficient or just lazier — and has it actually saved me much time, anyway? As with any intern, there comes a point where you wonder if outsourcing the task was more laborious than simply doing it yourself. Would I have deliberated over that Instagram caption as much if I’d not had so many to choose from? Won’t I realistically go back and fact-check ChatGPT’s coast-to-coast route anyway? Might this article have sounded more genuine if the whole thing hadn’t been generated by ChatGPT?
Only kidding. As mic-drop as it would’ve been to tell you this whole thing had been written by robots, you’d probably have nodded off before reaching the second paragraph, if the bot’s attempt at writing an intro was anything to go by. After losing my vanity and dating dignity to AI already, I’d at least like to keep my job (the one at the Evening Standard, not The Telegraph).