Warning, here’s the cynic in me coming out.
The NY Times has a vested interest in discrediting AI, specifically LLMs (which seem to be what they’re referring to): journalism is a prime target, since it’s pretty easy to get LLMs to generate believable articles. So here’s how I break down this article:

- Lean on Betteridge’s law of headlines to cast doubt on the long-term prospects of LLMs
- Further the doubt by pointing out that people don’t trust them
- Present them as a credible threat later in the article
- Juxtapose LLMs and cryptocurrencies while technically dismissing such a link (then why bring it up?)
- Leave the conclusion up to the reader
I learned nothing new about current or long-term LLM viability other than a vague “they took our jerbs!” emotional jab.
AI is here to stay, and it’ll continue getting better. We’ll adapt to how it changes things, hopefully as fast as or faster than it eliminates jobs.
Or maybe my tinfoil hat is on too tight.
The writers and editors may be against AI, but I’m betting the owners of the NYT would LOVE to have an AI that would simply rephrase “news” (ahem) “borrowed” from other sources. The second upper management thinks this is possible, the humans will be out on their collective ears.
No way. NYT depends on their ability to produce high quality exclusive content that you can’t access anywhere else.
In your hypothetical future, NYT’s content would be mediocre and no better than a million other news services. There’s no profit in that future.
This would actually explain a lot of the negative AI sentiment I’ve seen that’s suddenly going around.
Some YouTubers have hopped on the bandwagon as well. There was a video posted the other day where a guy attempted to discredit AI companies overall by saying their technology is faked. A lot of users were agreeing with him.
He then pointed to stories about Copilot/ChatGPT producing output that was very similar to a particular travel website’s content. He also pointed out how Amazon Fresh stores required a large number of outsourced workers to verify shopping cart totals (implying that there was no AI model at all, and not understanding that you need workers like this to retrain/fine-tune a model).
I would say that 90% of AI companies are fake. They are just running API calls to GPT-3 and calling themselves “AI” to get investors. Amazon even has an entire business to help companies pretend their AI works by crowdsourcing cheap labor to review data.
I don’t think that “fake” is the correct term here. I agree a very large portion of companies are just running API calls to ChatGPT and then patting themselves on the back for being “powered by AI” or some other nonsense.
This is exactly the point I was referring to before. Just because Amazon is crowdsourcing cheap labor to back up their AI doesn’t mean that the AI is “fake”. Getting an AI model to work well takes a lot of man-hours to continually train and improve it, as well as to make sure it’s performing well.
Amazon was doing something new (with their shopping cart AI) that no model had been trained on before. Training off of demo/test data doesn’t get you the kind of data that you get when you actually put it into a real world environment.
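To make that concrete, here’s a minimal sketch (Python, all names hypothetical; this is the general human-in-the-loop pattern, not Amazon’s actual system) of why real-world deployment needs so many human reviewers: low-confidence predictions get routed to a person, and the corrected answers become the next round of training data.

```python
# Hypothetical human-in-the-loop flow; model, review_queue, and
# training_set are illustrative stand-ins, not any real Amazon API.
CONFIDENCE_THRESHOLD = 0.90

def process_cart(model, frames, review_queue, training_set):
    """Predict cart contents; route uncertain carts to human reviewers."""
    items, confidence = model.predict(frames)
    if confidence >= CONFIDENCE_THRESHOLD:
        return items  # confident enough to charge automatically
    # Below threshold: a human verifies the cart total instead.
    corrected = review_queue.send_for_review(frames, items)
    # The human-verified answer is a labelled real-world example,
    # exactly the data that demo/test environments can't produce.
    training_set.append((frames, corrected))
    return corrected
```

As the model improves, fewer carts fall under the threshold, so the reviewer workload shrinks over time rather than staying fixed.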
In the end it looks like additional advancements are needed before a model like this can be reliable, but even then someone should be asking whether AI is really necessary for something like this when more reliable methods are available.
I honestly don’t understand why they didn’t just use RFID for the grocery stores. Or maybe they are, idk, but it’s cheap and doesn’t require much training to apply. That way you can verify the AI without needing much labor at all.
Then again, I suppose the point wasn’t to make a grocery service, but an optical AI service to sell to others.
That said, a lot of people don’t seem to understand how AI works, and the natural response to not understanding something is FUD.
Unless you pay for expensive tags (like $20 per tag) or use really short-range scanners (e.g. a hotel key card), RFID tags don’t work reliably enough.
Antitheft RFID tags, for example, won’t catch every single thief who walks out the door with a product. But if a thief comes back again and again to steal things… eventually one of the tags will trigger.
And even unreliable tags are a bit expensive, which is why they’re only used on high-margin, frequently stolen products (like clothing).
All the self serve stores in my country just use barcodes. They are dirt cheap and work reliably at longer range than a cheap RFID tag. Those stores use AI to flag potential thieves but never for purchases (for example recently I wasn’t allowed to pay for my groceries until a staff member checked my backpack, which the AI had flagged as suspicious).
The purpose of the RFID wouldn’t be to catch thieves, but to train the AI. As the AI gets better at detecting things, you reduce how many of the products are tagged. I’m seeing something like $0.30/ea on Amazon, ~$0.10/ea on AliExpress. I’m guessing an org like Amazon could get them even cheaper. I don’t know how well those work on cans, so maybe it’s a no-go, IDK.
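For illustration, here’s a rough sketch of that idea (Python, everything hypothetical; this is the commenter’s proposal, not anything Amazon has confirmed doing): the RFID reads act as ground truth for what the cameras saw, and any disagreement becomes a labelled training example.

```python
# Hypothetical sketch: RFID scans as ground truth for a vision model.
# rfid_reader, camera, vision_model, and dataset are stand-ins.
def collect_training_example(rfid_reader, camera, vision_model, dataset):
    """Compare the vision model's guess against the RFID ground truth."""
    frames = camera.capture_cart()
    predicted = set(vision_model.identify_items(frames))
    actual = set(rfid_reader.scan_cart())  # tags tell us the true contents
    if predicted != actual:
        # Mismatches are the valuable cases: labelled examples the model
        # got wrong, queued up for the next training round.
        dataset.append((frames, actual))
    return actual  # charge based on the reliable signal either way
```

As accuracy improves you’d tag a shrinking random sample of products instead of all of them, which is the “reduce how many of the products are tagged” step above.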
Barcodes could probably work fine too, provided they’re big enough to be clearly visible to cameras.
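If you wanted to test the barcode version, a camera-based read is only a few lines with off-the-shelf libraries (pyzbar and Pillow here; assuming both are installed, along with the underlying zbar system library):

```python
# Sketch: reading barcodes from a captured frame with off-the-shelf tools.
# Requires `pip install pyzbar pillow` plus the zbar system library.
from PIL import Image
from pyzbar.pyzbar import decode

def read_barcodes(image_path: str) -> list[str]:
    """Return the decoded payload of every barcode visible in the image."""
    return [r.data.decode("utf-8") for r in decode(Image.open(image_path))]

# Each decoded payload maps to a product ID, giving the vision model
# cheap ground truth in the same way the RFID idea above would.
print(read_barcodes("cart_frame.jpg"))
```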
Regardless, it seems like there are options aside from hiring a bunch of people to watch cameras. I’m interested to hear from someone more knowledgeable about why I’m wrong or whether they’re actually already doing something like this. I don’t live near any of the stores, so I can’t just go and see for myself (and are they still a thing?).
It might not be fake, but companies built on top of the OpenAI API don’t bring significant value and won’t last.
If you already have a solid product and want to add some AI capabilities, the OpenAI API is great. If it’s your only value proposition, not so much.
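To show how thin that kind of wrapper can be, here’s roughly the entire “AI” of many such products (a sketch using the official openai Python library, v1+; the model name and prompt are just placeholders):

```python
# A minimal "powered by AI" product that is really one API call.
# Requires `pip install openai` and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def summarize(text: str) -> str:
    """The whole 'proprietary AI' of many startups looks like this."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; any chat model works
        messages=[
            {"role": "system", "content": "Summarize the user's text."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content
```

If that call is the whole product, the moat is a prompt string, which is exactly why such companies won’t last.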
Mechanical Turk is a service that Amazon sells to other companies that are trying to pretend to be AI companies. The whole market is full of people making wild claims about their products that aren’t true, then desperately searching for the cheapest labor to actually deliver them.
I’m not actually a nuclear fission company if I take millions in R&D investment, pay me and my buddy half of it, and then pay a bunch of crackheads to pour diesel into an electric generator.
After reading through that wiki, Mechanical Turk doesn’t sound like the sort of thing that could keep up with what AI is actually able to do in real time today.
Contrary to your statement, Amazon isn’t selling this as a means to “pretend” to do AI work, and there’s no evidence of this on the page you linked.
That’s not to say that this couldn’t be used to fake an AI, it’s just not sold this way, and in many applications it wouldn’t be able to compete with the already existing ML models.
Can you link to any examples of companies making wild claims about their product where it’s suspected that they are using this service? (I couldn’t find any after a quick Google search… but I didn’t spend too much time on it).
I’m wondering if the misunderstanding here is based on the sections of that page related to AI work? The kind of AI work you would do with Turkers is the work needed to prepare data for training a machine learning model: things like labelling images, transcribing words from images, or (to put it in a way most of us have already experienced) solving captchas asking you to find the traffic lights (so that you can help train a self-driving car AI model).
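As a concrete example of that kind of data-preparation work, here’s a small sketch (Python; the example data is made up) of how redundant crowd labels are commonly turned into training labels: several workers label the same image, and the majority answer becomes ground truth.

```python
# Sketch: aggregating redundant crowd labels into one training label.
# Example data is invented; real pipelines also weight worker accuracy.
from collections import Counter

def majority_label(worker_labels: list[str]) -> str:
    """Pick the label most workers agreed on for one image."""
    label, count = Counter(worker_labels).most_common(1)[0]
    if count <= len(worker_labels) // 2:
        raise ValueError("no majority; request more labels for this image")
    return label

# Three workers labelled the same image; "traffic light" wins 2-to-1.
print(majority_label(["traffic light", "traffic light", "street lamp"]))
```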