Brain worms he got from eating roadkill.
VoterFrog@lemmy.world to Technology@lemmy.world • Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code
What? I’ve already written the design documentation and done all the creative and architectural parts that I consider most rewarding. All that’s left for coding is answering questions like “what exactly does the API I need to use look like?” and writing a bunch of error-handling if statements. That’s toil.
VoterFrog@lemmy.world to Technology@lemmy.world • Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code
It definitely depends on the person. There are people who are getting 90% of their coding done with AI, and I’m one of them. I have over a decade of experience, and I consider coding to be the easiest but most laborious part of my job, so it’s a welcome change.
One thing that’s really changed the game recently is RAG and tools with very good access to our company’s data. Good context makes a huge difference in the quality of the output. For my latest project, I’ve been using three internal tools: an LLM browser plugin that has access to our internal data and lets you pin pages (and docs) you’re reading for extra focus; a coding assistant, which also has access to internal data and repos but is trained for coding (unfortunately, it’s not integrated into our IDE); and an IDE agent, which has RAG where you can pin specific files, but without broader access to our internal data its output is a lot poorer.
So my workflow is something like this: my company is already pretty diligent about documenting things, so the first step is to write design documentation. The LLM plugin helps with researching some high-level questions and with delving into some of the details. Once that’s all reviewed and approved by everyone involved, we move into task breakdown and implementation.
First, I ask the LLM plugin to write a guide for how to implement a task, given the design documentation. I’m not interested in code, just a translation of design ideas and requirements into actionable steps. (Even if you don’t have the same setup as me, give this a try: asking an LLM to reason its way through a guide helps it handle much more complicated tasks.) Then I pass that guide to the coding assistant for code creation, including any relevant files as context. That code gets copied into the IDE. The whole process takes a couple of minutes at most, and that gets you like 90% there.
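If it helps to picture it, the two-step pattern looks roughly like the sketch below. `ask_llm` is just a stand-in for whatever assistant you have access to (ours are internal tools that aren’t public), and the doc and file paths are made-up examples; the point is the prompt structure, not any particular API.

```python
# Rough sketch of the "guide first, then code" flow described above.
# ask_llm() is a placeholder for whatever assistant you actually have
# (mine are internal tools), and the file paths are made-up examples.
from pathlib import Path


def ask_llm(prompt: str) -> str:
    """Placeholder: send the prompt to your LLM of choice and return its reply."""
    raise NotImplementedError("wire this up to your own assistant")


design_doc = Path("docs/checkout_service_design.md").read_text()  # hypothetical doc
task = "Add retry-with-backoff to the payment client, per section 4 of the design."

# Step 1: no code yet -- translate the design + task into actionable steps.
guide = ask_llm(
    "Using the design doc below, write a step-by-step implementation guide "
    "for this task. Do not write any code yet.\n\n"
    f"TASK: {task}\n\nDESIGN DOC:\n{design_doc}"
)

# Step 2: hand the guide plus the relevant source files to the coding assistant.
context_files = [Path("src/payments/client.py")]  # hypothetical file
sources = "\n\n".join(f"FILE {p}:\n{p.read_text()}" for p in context_files)
code = ask_llm(
    "Follow this implementation guide and produce the code changes.\n\n"
    f"GUIDE:\n{guide}\n\nRELEVANT FILES:\n{sources}"
)

print(code)  # review it, paste into the IDE, then iterate on compile errors and tests
```

The only part that matters is that the guide gets generated first and then fed back in alongside the real source files as context.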
Next is to get things compiling. This is either manual or done in iteration with the coding assistant. Then, before I worry about correctness, I focus on the tests. Get a good test suite up and it’ll catch any problems and let you refactor without causing regressions. Again, this may be partially manual and partially iteration with LLMs. Once the tests look good, it’s time to get them passing. And this is the point where I start really reading through the code and getting things from 90% to 100%.
All in all, I’m still applying a lot of professional judgement throughout the whole process. But I get to focus on the parts where that judgement is actually needed and not the more mundane and toilsome parts of coding.
As far as I understand as a layman, the measurement tool doesn’t really matter. Any observer needs to interact with the photon in order to observe it and so even the best experiment will always cause this kind of behavior.
With no observer: the photon, acting as a wave, passes through both slits simultaneously and on the other side of the divider, starts to interfere with itself. Where the peaks or troughs of the wave combine is where the photon is most likely to hit the screen in the back. In order to actually see this interference pattern we need to send multiple photons through. Each photon essentially lands in a random location and the pattern only reveals itself as we repeat the experiment. This is important for the next part…
With an observer: the photon still passes through both slits. However, the interaction with the observer’s wave function causes the part of the photon’s wave in that slit to offset in phase. In other words, the peaks and troughs are no longer in the same place. So now the interference pattern that the photon wave forms with itself still exists but, critically, it looks completely different.
Now we repeat with more photons. BUT each time you send a photon through, it comes out with a different phase offset. Why? Because the outcome of the interaction with the observer is governed by quantum randomness. So every photon winds up with a different interference pattern, which means that there’s no consistency in where they wind up on the screen. It just looks like random noise.
At least that’s what I recall from an episode of PBS Space Time.
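If you want to see how the random per-photon phase washes out the pattern, here’s a toy simulation (my own sketch, with made-up numbers for wavelength, slit separation, and screen distance; it only illustrates the argument above, not the real experiment):

```python
# Toy model: each photon's landing spot is sampled from the two-slit pattern
# |e^{i*0} + e^{i(dphi + delta)}|^2 = 2 + 2*cos(dphi + delta).
# "Unobserved": delta = 0 for every photon -> stable fringes build up.
# "Observed": a fresh random delta per photon -> the averaged pattern flattens out.
# All the numbers below are arbitrary demo values.
import numpy as np

rng = np.random.default_rng(0)

wavelength = 650e-9       # m
slit_separation = 50e-6   # m
screen_distance = 1.0     # m
x = np.linspace(-0.02, 0.02, 400)  # screen positions, m
dphi = 2 * np.pi * slit_separation * x / (wavelength * screen_distance)


def sample_hits(n_photons: int, observed: bool) -> np.ndarray:
    """Sample where each photon lands on the screen."""
    hits = np.empty(n_photons)
    for i in range(n_photons):
        delta = rng.uniform(0, 2 * np.pi) if observed else 0.0
        intensity = 2 + 2 * np.cos(dphi + delta)
        hits[i] = rng.choice(x, p=intensity / intensity.sum())
    return hits


for label, observed in [("unobserved", False), ("observed", True)]:
    counts, _ = np.histogram(sample_hits(20_000, observed), bins=40)
    # Fringes show up as a big spread between the fullest and emptiest bins.
    print(f"{label}: max bin = {counts.max()}, min bin = {counts.min()}")
```

The “observed” case should come out close to flat, which is the random noise described above.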
Unfortunately, the horrible death would come long before you even reach the event horizon. The tidal forces would tear you apart and, eventually, tear apart the molecules that used to make up you. Every depiction of crossing a black hole event horizon just pretends that doesn’t happen for the sake of demonstration.
VoterFrog@lemmy.world to Programming@programming.dev • Ignoring lemmyhate, are programmers really using AI to be more efficient?
My favorite use is actually just to help me name stuff. Give it a short description of what the thing does and get a list of decent names. Refine if they’re all missing something.
Also useful for finding things quickly in generated documentation, by attaching the documentation as context. And I use it when trying to remember some of the more obscure syntax stuff.
As for coding assistants, they can help quickly fill in boilerplate or maybe autocomplete a line or two. I don’t use it for generating whole functions or anything larger.
So I get some nice marginal benefits out of it. I definitely like it. It’s got a ways to go before it replaces the programming part of my job, though.
VoterFrog@lemmy.world to Science Memes@mander.xyz • Actors that have been the least believable scientist castings, I’ll start.
He became a rogue scholar, huh? A dark path that leads only to evil scientist.
I don’t think it’s working. LLMs don’t have any trouble parsing it.
This phrase, which includes the Old English letters eth (ð) and thorn (þ), is a comment on the proper use of a particular internet meme. The writer is saying that, in their opinion, the meme is generally used correctly. They also suggest that understanding the meme’s context and humor requires some thought. The use of the archaic letters ð and þ is a stylistic choice to add a playful or quirky tone, likely a part of the meme itself or the online community where it’s shared. Essentially, it’s a statement of praise for the meme’s consistent and thoughtful application.
It’s what OP’s parents call the first day they saw him.
I heard it more like this: the fact that our universe is expanding faster than light means there are parts of the universe we can never reach, even at light speed, which is mathematically identical to the event horizon of a black hole, which not even light can escape from. There’s no singularity at the center of our observable universe, though.
Just to add to this… It’s not like there’s an event horizon like with a black hole. It’s just that in the amount of time it would take the light to reach us, there will have been more space “created” than the distance the light was able to travel. For someone living near the edge of our observable universe, there’s nothing strange happening. In fact, we’d be at the edge of their observable universe, the edge of their “event horizon.”
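For a rough sense of the scale involved (my own back-of-the-envelope numbers, not anything from the thread): the distance at which Hubble’s law gives a recession speed equal to the speed of light, the Hubble radius, is just c/H0.

```python
# Back-of-the-envelope Hubble radius: the distance at which the naive
# recession speed v = H0 * d reaches the speed of light. H0 ~ 70 km/s/Mpc
# is an assumed round value, and the true cosmic event horizon also depends
# on the expansion history, so treat this as order-of-magnitude only.
C_KM_S = 299_792.458      # speed of light, km/s
H0 = 70.0                 # Hubble constant, km/s per megaparsec (assumed)
LY_PER_MPC = 3.2616e6     # light-years per megaparsec

hubble_radius_mpc = C_KM_S / H0
hubble_radius_gly = hubble_radius_mpc * LY_PER_MPC / 1e9

print(f"Hubble radius ~ {hubble_radius_mpc:.0f} Mpc ~ {hubble_radius_gly:.1f} billion light-years")
# ~ 4283 Mpc ~ 14.0 billion light-years
```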
Well the reason we know is because we’re at the center of the observable universe and Earth isn’t a singularity.
VoterFrog@lemmy.world to Programming@programming.dev • Live coding interviews measure stress, not coding skils
Sure, you can move some parts of the conversation to a review session, though I think the answers will be heavily influenced by hindsight at that point. For example, hearing about dead-end paths they considered can be very informative in a way that I think candidates assume is negative. Nobody expects you to get it right the first time, and telling the interviewer about your binary tree solution (that actually doesn’t work) can be a good thing.
But the biggest problem, I think, with not being in the room as an interviewer is that you lose the opportunity to hint and direct the candidate away from unproductive solutions or uses of time. There are people who won’t ask questions about things that are ambiguous, or who’ll misinterpret the problem, and that shouldn’t be a deal breaker.
Usually it only takes a very subtle nudge to get things back on track; otherwise you wind up getting a solution that’s not at all what you’re looking for (and, more importantly, doesn’t demonstrate the knowledge you’re looking for). Or maybe you wind up with barely a solution because the candidate spent most of their time spinning their wheels. A good portion of the questions I ask during an interview serve this purpose of keeping the candidate focused on the right things.
VoterFrog@lemmy.world to Programming@programming.dev • Live coding interviews measure stress, not coding skils
I’m not sure that offline or solo coding tests are any better. A good coding interview should be about a lot more than just seeing whether they produce well-structured and optimal code. It’s about seeing what kinds of questions they’ll ask, what kinds of alternatives and trade-offs they’ll consider, and probing some of the decisions they make. All the stuff that goes into being a good SWE, which you can demonstrate even if you’re having trouble coming up with the optimal solution to this particular problem.
I think it definitely depends on the level of involvement and the intent. Sure, not everybody who just asks for something to be made for them is doing much directing. But someone who does a lot of refinement and curation of AI-generated output needs to demonstrate the same kind of creativity and vision as an actual director.
I guess I’d say telling an artist to do something doesn’t make you a director. But a director telling an AI to do the same kinds of things they’d tell an artist doesn’t suddenly make them not a director.
I’m fairly certain most people consider directing (film, music, art, etc) to be an artistic process.
The language model isn’t teaching anything; it is changing the wording of something and spitting it back out. And in some cases, not changing the wording at all, just spitting the information back out without paying the copyright source.
You could honestly say the same about most “teaching” that a student without a real comprehension of the subject does for another student. But ultimately, that’s beside the point. Because changing the wording, structure, and presentation is all that is necessary to avoid copyright violation. You cannot copyright the information. Only a specific expression of it.
There’s no special exception for AI here. That’s how copyright works for you, me, the student, and the AI. And if you’re hoping that copyright is going to save you from the outcomes you’re worried about, it won’t.
Makes sense to me. Search indices tend to store large amounts of copyrighted material yet they don’t violate copyright. What matters is whether or not you’re redistributing illegal copies of the material.
Of course, Demonstrating value is just the first step in the system. For the next, you need to Engage physically…