Does anyone know, or can anyone guess, the business case for predictive text? On phone apps, it is often incredibly difficult to turn off. Why is that, do you think? (The examples I have recent experience with are Facebook and Outlook mobile apps.)
I would have thought that, for AI training purposes, they would want humans typing things and not just regurgitating canned responses. But apparently not?
On Android you can just install another keyboard if your current one doesn’t have a setting for this.
If I had to guess a business case, I’d say that predictive text as a feature gives you a “legitimate reason” to send your typing data to Google or whoever to train the prediction engine, and they want that data.
How come the apps are controlling your keyboard? Shouldn’t they just use your phone’s selected keyboard?
I have disabled autocorrect in my (Android) keyboard settings. That disables it system-wide; works for me.
I assume you don’t mean keyboard text predictions, which would be a different thing, but predictions built into the platforms themselves.
It’s a new convenience feature. Something the platform can shine with, use to retain users, and set itself apart from other platforms.
Having training data is not the primary potential gain. It’s user investment, retention, and interaction. Users choosing the generated text is still valid training data: whether they typed similar words themselves or took what was suggested is still input on user choice.
It does lead to a convergence toward a centralized, standardized way of speaking, with a self-strengthening feedback loop.
Thank you! This makes sense to me
On Android, most apps depend on the keyboard.
- Gboard has a configurable suggestions bar where you can pick words, or not.
- Microsoft SwiftKey works similarly, but it underlines the word you’re typing.
- AnySoftKeyboard works like SwiftKey.
The only exception I’ve seen is Copilot, which shows the suggested word inline, to be accepted with [tab], but you can still type a different one.
I’ve noticed no such behavior on Facebook. Have you checked your keyboard settings?
It’s simple:
Beat the population into learned-helplessness,
& then all the AI molestingware that the device can run, can be running on it.
Desensitization/enforced-learned-helplessness.
It’s just a conditioning-step, is all.
The profit is in having the population not have any privacy left, & living only within the neuromarketing-platforms that the mainstream operating-systems are becoming.
It’s just a step in the suckerpunching of humankind, is all.
_ /\ _
I have none of that on my phone, just a plain old keyboard.
But the reason it’s everywhere is that it’s the new hot thing, and every company in the world feels like they have to get on board now or risk being left behind; they can’t let anyone have a head start. It’s incredibly dumb and shortsighted, but since actually innovating on features is hard and AI is cheap to implement, that’s what every company goes for.
It’s not new, nor is it AI. Predictive text suggestions have been in Android for ages now.
I think predictive text predates even Android and smartphones (or nearly so), back when we had to press a key three times until the specific character appeared; the dictionary-based prediction that helped with that was called T9, and it was just a dictionary. Having or not having a dictionary suggestion was the difference between life and death. The modern smartphone has way more compute power and resources, so it can analyze text in much more depth. It’s just the logical next step from the plain and simple dictionary.
See, it isn’t new and it isn’t AI, but it’s the same line of development as modern LLMs. They’ve just rebranded existing projects and lines of development as “AI technology” to be marketable.
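To make that concrete: classic T9 was essentially a lookup from key-press sequences into a frequency-ranked dictionary. A toy sketch of the idea (the mini word list and frequencies here are made up for illustration, not any real keyboard’s data):

```python
# Toy sketch of T9-style dictionary prediction. The word list and
# frequencies are invented; real keyboards ship much larger
# frequency-ranked dictionaries.

# Standard phone keypad: which letters live on which digit.
KEYPAD = {
    '2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
    '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz',
}
LETTER_TO_DIGIT = {ch: d for d, letters in KEYPAD.items() for ch in letters}

# Hypothetical dictionary: word -> how often it gets typed.
DICTIONARY = {'act': 200, 'bat': 150, 'cat': 900,
              'good': 500, 'home': 400, 'gone': 150, 'hood': 80}

def to_digits(word: str) -> str:
    """Map a word to the digit sequence you'd press for it."""
    return ''.join(LETTER_TO_DIGIT[ch] for ch in word.lower())

def suggest(digits: str, limit: int = 3) -> list[str]:
    """Return dictionary words matching the pressed digits, most frequent first."""
    matches = [w for w in DICTIONARY if to_digits(w) == digits]
    return sorted(matches, key=DICTIONARY.get, reverse=True)[:limit]

print(suggest('228'))   # ['cat', 'act', 'bat'] -- same keys, ranked by frequency
print(suggest('4663'))  # ['good', 'home', 'gone'] -- 'hood' drops off the top 3
```

The “analyze text in more depth” part is the same idea with more context: instead of ranking by raw word frequency, modern engines rank by how likely a word is given the words before it.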
Is that really the case? Would everything one types on the keyboard be sent to the companies and used as AI training data? Does that apply to every keyboard on the smartphone?
Might be that information about when you do and don’t use the output is helpful for training. Like, if you use the output, good sign the output is good.
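If that’s the mechanism, the useful signal is just whether each suggestion was taken, which can be turned into labeled examples directly. A hypothetical sketch (the event fields and structure are invented for illustration, not taken from any real app’s telemetry):

```python
# Hypothetical sketch of turning "did the user take the suggestion?"
# into labeled training data. Field names and structure are invented
# for illustration, not taken from any real app.
from dataclasses import dataclass

@dataclass
class SuggestionEvent:
    context: str      # text typed so far
    suggestion: str   # what the engine offered
    accepted: bool    # did the user tap/keep it?

def to_training_example(event: SuggestionEvent) -> tuple[str, str, int]:
    # Accepted suggestions become positive examples, rejected ones negatives.
    return (event.context, event.suggestion, 1 if event.accepted else 0)

log = [
    SuggestionEvent("See you ", "tomorrow", accepted=True),
    SuggestionEvent("See you ", "later", accepted=False),
]
print([to_training_example(e) for e in log])
```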
I’ve seen this in a few places on desktop, and I have no clue why it’s even a feature. I’m not aware of anyone using it anywhere (although to be fair I haven’t thought to ask).
As for why it’s enabled by default, probably for visibility. The easiest way to get people to use a feature is to turn it on for them and make them explicitly disable it (if that’s even an option). For AI training, they could theoretically just capture typing data and messages regardless of whether the feature is enabled.