More usable for the average user and more supported by actual sites and services, so yes.
The passkey stored locally in some kind of hardware backed store on your device or in your password manager is the first factor: something you have.
The PIN/password or fingerprint/face to unlock the device and access the stored passkey is the second factor: something you know or something you are, respectively.
Two factors get you to 2FA.
And the fewer times people enter their password or email/SMS-based 2FA codes because they're using passkeys instead, the less opportunity there is to be phished, even if the older authentication methods are still usable on the account.
You do realize that your biometric authentication techniques don’t actually send your biometrics (e.g. fingerprint/face) to the website you’re using and that you are actually just registering your device and storing a private key? Your biometrics are used to authenticate with your local device and unlock a locally-stored private key.
That private key is essentially what passkeys are doing, storing a private key either in a password manager or locally on device backed by some security hardware (e.g. TPM, secure enclave, hardware-backed keystore).
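To make the two factors concrete, here is a toy sketch of how a locally stored key can be wrapped by a PIN-derived key. Everything here is an illustration and my own invention: the XOR "encryption" stands in for real hardware-backed AES, and `wrap_key`/`unwrap_key` are hypothetical names, not any real keystore API.

```python
import hashlib
import secrets

def wrap_key(private_key: bytes, pin: str, salt: bytes) -> bytes:
    # Derive a wrapping key from the PIN: the "something you know" factor.
    wrapping = hashlib.pbkdf2_hmac(
        "sha256", pin.encode(), salt, 100_000, dklen=len(private_key)
    )
    # Toy XOR "encryption" -- a real device uses hardware-backed ciphers, not this.
    return bytes(a ^ b for a, b in zip(private_key, wrapping))

unwrap_key = wrap_key  # XOR is its own inverse

salt = secrets.token_bytes(16)
private_key = secrets.token_bytes(32)  # stand-in for a real signing key

# The encrypted blob on disk is the "something you have" factor;
# the private key itself never leaves the device.
stored_blob = wrap_key(private_key, "1234", salt)

assert unwrap_key(stored_blob, "1234", salt) == private_key  # correct PIN unlocks it
```

A wrong PIN simply derives a different wrapping key and yields garbage bytes, which is why nothing biometric or secret ever needs to be sent to the website.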
Just want to point out that WordPress.org is apparently not owned by the foundation but by Matt himself, which many people are confused about. It should probably not be used as a stand-in for the foundation.
https://www.pluginvulnerabilities.com/2024/09/30/who-owns-the-wordpress-website-and-wordpress-org/
Matt Mullenweg Apparently Personally Owns the Website
The author of the post quoted in the previous section seems to treat it as a given that Matt Mullenweg owns the WordPress website. The closest we have found to confirmation of that is screenshots, apparently from a WordPress Slack, where he apparently wrote this:
W.org belongs to me, it’s not part of the foundation or any trust, I run it in an open way that allows lots of folks to participate but they don’t own it.
And this:
I have direct and root access to the account (and everything on w.org) because I started it.
It’s an improvement over the current systems. Incremental improvements to the state of things can be a good thing too.
Composable moderation/custom labeling and custom algorithmic feeds are two things that Mastodon doesn’t have that Bluesky does.
Isn’t the main problem that most people don’t use the E2E encrypted chat feature on Telegram, so most of what’s going on is not actually private and Telegram does have the ability to moderate but refuses to (and also refuses to cooperate)?
Something like Signal gets around this by not having the technical ability to moderate (or any substantial data to hand over).
Before people can be persuaded to use them, we have to persuade or force the companies and sites to support them.
A multi-billion dollar social media company sued an ad industry group that was trying to help companies adopt some kind of brand-safety standards to prevent a company’s ads from appearing next to objectionable content. The group reportedly had two full-time staff members. This isn’t some big win; it’s bullying.
Basically, with passkeys you have a public/private key pair that is generated per account, per site, and stored somewhere on your end (on a hardware device, in a password manager, etc). When setting it up with the site, you give your public key to the site so that it can recognize you in the future. When you want to prove that it’s you, the website sends you a unique challenge message and asks you to sign it (the challenge is unique to prevent replay attacks). There’s some extra stuff in the spec regarding how the keys are stored and how the user is verified on the client side (such as requiring both access to the key and some kind of presence test or knowledge/biometric factor), but for the most part it’s like client certificates, only easier.
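The challenge/sign/verify flow can be sketched with a toy Schnorr-style signature in pure stdlib Python. The tiny 11-bit group parameters below are my own and wildly insecure, chosen only so the arithmetic is visible; real passkeys use WebAuthn with curves like P-256 or Ed25519.

```python
import hashlib
import secrets

# Toy group: p = 2q + 1 with tiny primes -- illustration only, NOT secure.
p, q, g = 2039, 1019, 4  # g = 2^2 generates the subgroup of prime order q

def H(r: int, msg: bytes) -> int:
    # Hash the commitment together with the message, reduced into the group order.
    return int.from_bytes(hashlib.sha256(r.to_bytes(4, "big") + msg).digest(), "big") % q

# Registration: the device generates a key pair; only the public key goes to the site.
x = secrets.randbelow(q - 1) + 1  # private key, never leaves your end
y = pow(g, x, p)                  # public key, stored by the site

# Login: the site sends a fresh random challenge (this is what prevents replays)...
challenge = secrets.token_bytes(16)

# ...and the device signs it with the private key.
k = secrets.randbelow(q - 1) + 1
r = pow(g, k, p)            # commitment
e = H(r, challenge)
s = (k + x * e) % q         # signature is (e, s)

# Verification: the site uses only the public key to recompute the commitment.
r_check = (pow(g, s, p) * pow(y, -e, p)) % p
assert r_check == r
assert H(r_check, challenge) == e  # signature verifies
```

Because the site only ever holds `y`, a breach of the site's database leaks nothing that lets an attacker sign future challenges, which is the core advantage over passwords.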
Don’t most DoH resolver settings have you enter the IP (for the actual lookup connection) along with the hostname of the DoH server (for cert validation for HTTPS)? Wouldn’t this avoid the first-lookup problem, because there would be a certificate mismatch if they tried to intercept it?
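As one concrete example of this pattern, Firefox's trusted recursive resolver (TRR) prefs pair a DoH URL with a bootstrap IP; the pref names below are Firefox's, while the URL and address are placeholder values, not a real resolver:

```js
// user.js -- hypothetical example values
user_pref("network.trr.mode", 3);  // 3 = DoH only, no fallback to plain DNS
user_pref("network.trr.uri", "https://dns.example/dns-query");
user_pref("network.trr.bootstrapAddress", "203.0.113.53");
```

The bootstrap address lets the browser reach the resolver without a plaintext lookup, and the TLS certificate is still validated against the hostname in the URI.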
With a breach of this size, I think we’re officially at the point where the data about enough people is out there and knowledge based questions for security should be considered unsafe. We need to come up with different authentication methods.
They definitely knew it would impact their ad business but I think what did it was the competition authorities saying they couldn’t do it to their competitors either, even if they were willing to take the hit on their own services.
Impact on their business (bold added): https://support.google.com/admanager/answer/15189422
- Programmatic revenue impact without Privacy Sandbox: By comparing the control 2 arm to the control 1 arm, we observed that removing third-party cookies without enabling Privacy Sandbox led to -34% programmatic revenue for publishers on Google Ad Manager and -21% programmatic revenue for publishers on Google AdSense.
- Programmatic revenue impact with Privacy Sandbox: By comparing the treatment arm to control 1 arm, we observed that removing third-party cookies while enabling the Privacy Sandbox APIs led to -20% and -18% programmatic revenue for Google Ad Manager and Google AdSense publishers, respectively.
For scenario one, they totally have to delete the data used for age verification after they collect it, according to the law (unless another law says they have to keep it), and you can totally trust every company to follow the law.
For scenario two, that’s where the age verification requirements of the law come in.
No, no, no, it’s super secure you see, they have this in the law too:
Information collected for the purpose of determining a covered user’s age under paragraph (a) of subdivision one of this section shall not be used for any purpose other than age determination and shall be deleted immediately after an attempt to determine a covered user’s age, except where necessary for compliance with any applicable provisions of New York state or federal law or regulation.
And they’ll totally never be hacked.
From the description of the bill (bold added):
https://legislation.nysenate.gov/pdf/bills/2023/S7694A
To limit access to addictive feeds, this act will require social media companies to use commercially reasonable methods to determine user age. Regulations by the attorney general will provide guidance, but this flexible standard will be based on the totality of the circumstances, including the size, financial resources, and technical capabilities of a given social media company, and the costs and effectiveness of available age determination techniques for users of a given social media platform. For example, if a social media company is technically and financially capable of effectively determining the age of a user based on its existing data concerning that user, it may be commercially reasonable to present that as an age determination option to users. Although the legislature considered a statutory mandate for companies to respect automated browser or device signals whereby users can inform a covered operator that they are a covered minor, we determined that the attorney general would already have discretion to promulgate such a mandate through its rulemaking authority related to commercially reasonable and technologically feasible age determination methods. The legislature believes that such a mandate can be more effectively considered and tailored through that rulemaking process. Existing New York antidiscrimination laws and the attorney general’s regulations will require, regardless, that social media companies provide a range of age verification methods all New Yorkers can use, and will not use age assurance methods that rely solely on biometrics or require government identification that many New Yorkers do not possess.
In other words: sites will have to figure it out and make sure that it’s both effective and non-discriminatory, and the safe option would be for sites to treat everyone like children until proven otherwise.
Doesn’t necessarily need to be anyone with a lot of money, just a lot of people mass reporting things combined with automated systems.
It’s like an automated tipofmytongue but for everything you do on your computer.
What separate auth operation is needed besides authenticating with the local device to unlock a passkey?