So, web encryption broken when? Now?
It takes about a billion qubits to break 2048-bit RSA encryption, so a while. I saw something recently about reducing that to about 20 million qubits, but it’s still a while off.
More importantly, how long until I can guarantee a 51% chance of solving every bitcoin block?
Hash functions are not known to be quantum vulnerable (i.e., there’s no known quantum algorithm that provides an exponential speedup; the best you can do is use Grover’s algorithm for a quadratic speedup on the brute-force search). So maybe never.
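To put a number on that “quadratic speedup”: a minimal back-of-the-envelope sketch, assuming a 256-bit hash and treating a preimage attack as unstructured search over the output space.

```python
import math

# Classical brute-force preimage search over a 256-bit hash:
# expected work on the order of 2**256 hash evaluations.
classical = 2**256

# Grover's algorithm needs roughly sqrt(N) iterations instead of N.
grover = math.isqrt(classical)  # == 2**128

print(f"classical: 2^{classical.bit_length() - 1}")  # 2^256
print(f"grover:    2^{grover.bit_length() - 1}")     # 2^128
```

2^128 operations is still far beyond any foreseeable machine, which is why hashes count as quantum resistant in practice; where the halved security level matters, the usual fix is simply a larger output (e.g., SHA-384 instead of SHA-256).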
They’re one-way functions. Encryption requires decryption, so you cannot lose information.
Hash functions are meant to lose information. They cannot be reversed. What they’re good at is verification; do you have the right password? Do you have a proof that this is your message and not someone else’s?
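That second kind of verification (“proof that this is your message and not someone else’s”) is what a keyed hash, an HMAC, gives you. A minimal sketch with Python’s stdlib, assuming a hypothetical shared secret key:

```python
import hmac
import hashlib

key = b"shared secret key"          # hypothetical key known to sender and receiver
msg = b"transfer $10 to alice"

# Sender attaches a tag computed over the message with the secret key.
tag = hmac.new(key, msg, hashlib.sha256).digest()

# Receiver recomputes the tag; only someone holding the key could match it.
ok = hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest())
print(ok)  # True

# A forged or modified message fails verification.
forged = hmac.new(key, b"transfer $10 to mallory", hashlib.sha256).digest()
print(hmac.compare_digest(tag, forged))  # False
```

(`compare_digest` is used instead of `==` to avoid leaking information through comparison timing.)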
We already use hash functions where they make sense, but the parent is not entirely right; not all hashes and signatures are equal. Some are very quantum susceptible. Those will likely be broken quite soon (think years, not decades). Others are quantum resistant.
Hashing is “one way” and produces a fixed-length output. It’s useful for things like detecting whether data has been modified, or, in the case of passwords, storing a value that lets you check a password is correct without storing the password itself.
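The password case looks like this in practice; a minimal sketch using PBKDF2 from Python’s stdlib (real deployments often prefer argon2 or scrypt, and the iteration count here is illustrative):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a slow, salted hash; store (salt, digest), never the password."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password, salt, stored):
    """Re-derive from the attempt and compare; the original is never needed."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("hunter2")
print(check_password("hunter2", salt, stored))   # True
print(check_password("hunter3", salt, stored))   # False
```

The random salt means two users with the same password still get different stored digests, and the deliberately slow derivation makes brute-force guessing expensive.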
You cannot “reverse” a hash by design.
Encryption is reversible; you need to be able to get the original data back.
We do use both together in various ways, e.g., encrypt data to protect it and then hash the data to make sure it hasn’t been modified. They go hand in hand.
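A toy sketch of that pairing: a repeating-key XOR stands in for a real cipher (it is NOT secure, it just makes the reversibility visible), and a SHA-256 checksum over the ciphertext catches modification.

```python
import hashlib
import itertools

def toy_encrypt(key, plaintext):
    # NOT a real cipher: XOR with a repeating key, purely for illustration.
    # XOR is its own inverse, so the same function also decrypts.
    return bytes(p ^ k for p, k in zip(plaintext, itertools.cycle(key)))

key = b"toy-key"
plaintext = b"attack at dawn"

ciphertext = toy_encrypt(key, plaintext)
checksum = hashlib.sha256(ciphertext).hexdigest()   # stored/sent alongside

# Receiver: check integrity first, then reverse the encryption.
assert hashlib.sha256(ciphertext).hexdigest() == checksum
recovered = toy_encrypt(key, ciphertext)
print(recovered)  # b'attack at dawn'

# A single flipped bit changes the hash, so tampering is caught.
tampered = bytes([ciphertext[0] ^ 1]) + ciphertext[1:]
print(hashlib.sha256(tampered).hexdigest() == checksum)  # False
```

In real systems the integrity check is keyed (an HMAC, or an authenticated cipher mode like AES-GCM), since an attacker who can modify the ciphertext could otherwise just recompute a plain hash to match.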
IIRC, those several-million-qubit computers out there right now aren’t really comparable, either. They use a ton of physical qubits, expecting many of them to decohere, but hoping enough survive to get a useful result. IBM’s approach is to try to get the most out of every qubit. Either approach is valid, but IBM’s 1000 qubits can’t be directly compared to the millions of qubits used elsewhere.
CA doesn’t have your private key