Post by yamanhosen5657 on Mar 6, 2024 10:31:06 GMT
This used to be a bigger problem with ChatGPT, and the AI has already improved a lot. When I tested it a few months ago, I asked for a list of popular blogs that covered remote work, and its suggestions were terrible: of the five, three had dead links, one was the marketing site for a psychic, and one was a remote work job site. When I retested with GPT-3, its suggestions were a bit better (there were no psychics, at least), but there were still a few problems.

[Image: ChatGPT results for 5 popular blogs that cover remote work]

The first three links 404ed, even though Remote.co, Zapier, and FlexJobs all have blogs.
The Remote Revolution doesn't seem to exist, at least not at that URL. The Buffer link is okay, though it redirects to a different URL and is more of a roundup blog post than a blog about remote work. And even with GPT-4, ChatGPT's suggestions were a bit of a mixed bag.

[Image: GPT-4 results for the same prompt]

Only the first URL 404s now, but this is really a list of four job sites and a psychic's blog, and even GPT-4 recommends I use Google instead. Regardless of which model I used, it's clear ChatGPT hasn't done very well here. Sure, the list is presented clearly, but as GPT-4 itself says, I would get much better results just by Googling "remote work blogs." At least then, all the URLs would be accurate, and Google would have identified which were the best or most trustworthy and brought them toward the top.
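If you want to sanity-check a list of AI-suggested URLs yourself rather than clicking each one, a small script will do it. This is a minimal sketch using only Python's standard library; the function names and the example URLs are illustrative, not part of the original post. Note that `urlopen` follows redirects automatically, so a link like Buffer's that 301s to a new page will report the status of the final page.

```python
# Minimal sketch: bucket AI-suggested URLs into dead / ok / other,
# roughly matching the checks described in the post above.
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def classify_status(code):
    """Bucket an HTTP status code: 404 is dead, 2xx is ok, 3xx is redirect."""
    if code == 404:
        return "dead"
    if 300 <= code < 400:
        return "redirect"
    if 200 <= code < 300:
        return "ok"
    return "other"

def check_url(url, timeout=10):
    """HEAD-request a URL and report its bucket; network errors count as dead.

    urlopen follows redirects, so a redirecting link is reported by the
    status of the page it finally lands on.
    """
    try:
        req = Request(url, method="HEAD", headers={"User-Agent": "link-check"})
        with urlopen(req, timeout=timeout) as resp:
            return classify_status(resp.status)
    except HTTPError as err:
        return classify_status(err.code)
    except URLError:
        return "dead"
```

Calling `check_url("https://zapier.com/blog")` on each suggested link would have flagged the three 404s immediately, without trusting ChatGPT's presentation of them.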
And what about questions with clear factual answers? When I asked ChatGPT with GPT-3 for the average rainfall in Dublin during June, July, and August, it gave me answers that were close to the official values, but still wrong.

[Image: ChatGPT results for average rainfall at Dublin Airport]

Again, this is information that's easy to find online with a quick Google search (you might not even have to click, depending on how you phrase it), but ChatGPT is happy to present the wrong answers concisely, coherently, and authoritatively. The only way to know that it's lied to you is to go and check the details yourself.