• 0 Posts
  • 107 Comments
Joined 2 years ago
Cake day: June 12th, 2023




  • It definitely depends on the person. There are people who are getting 90% of their coding done with AI, and I’m one of them. I have over a decade of experience, and I consider coding the easiest but most laborious part of my job, so it’s a welcome change.

    One thing that’s really changed the game recently is RAG and tools with very good access to our company’s data. Good context makes a huge difference in the quality of the output. For my latest project, I’ve been using three internal tools. The first is an LLM browser plugin which has access to our internal data and lets you pin pages (and docs) you’re reading for extra focus. The second is a coding assistant, which also has access to internal data and repos but is trained for coding; unfortunately, it’s not integrated into our IDE. The third, the IDE agent, has RAG where you can pin specific files, but without broader access to our internal data its output is a lot poorer.

    So my workflow is something like this: My company is already pretty diligent about documenting things so the first step is to write design documentation. The LLM plugin helps with research of some high level questions and helps delve into some of the details. Once that’s all reviewed and approved by everyone involved, we move into task breakdown and implementation.

    First, I ask the LLM plugin to write a guide for how to implement a task, given the design documentation. I’m not interested in code, just a translation of design ideas and requirements into actionable steps (even if you don’t have the same setup as me, give this a try. Asking an LLM to reason its way through a guide helps it handle a lot more complicated tasks). Then, I pass that to the coding assistant for code creation, including any relevant files as context. That code gets copied to the IDE. The whole process takes a couple minutes at most and that gets you like 90% there.
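    For anyone without this setup, the guide-then-code flow above can be sketched roughly like this. Everything here is illustrative, not a real API: `complete` stands in for whatever LLM call you have, and the prompts are just the shape of the idea.

```python
from typing import Callable

def plan_then_code(complete: Callable[[str], str],
                   design_doc: str, task: str,
                   context_files: list[str]) -> str:
    # Step 1: ask for an implementation guide, not code. Making the model
    # reason its way through a guide first handles more complicated tasks.
    guide = complete(
        f"Given this design doc:\n{design_doc}\n\n"
        f"Write a step-by-step implementation guide for: {task}.\n"
        "No code, just actionable steps."
    )
    # Step 2: hand the guide plus the relevant files to the coding model.
    return complete(
        f"Implement this guide:\n{guide}\n\n"
        "Relevant files:\n" + "\n".join(context_files)
    )

# Stub "LLM" so the sketch runs without any real service.
draft = plan_then_code(lambda prompt: f"[response to: {prompt[:40]}...]",
                       "design doc text", "add retry logic", ["client.py"])
print(draft)
```

    The point of the two-step split is that the intermediate guide is reviewable in plain language before any code exists.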

    Next is to get things compiling. This is either manual or done in iteration with the coding assistant. Then, before I worry about correctness, I focus on the tests. Get a good test suite up and it’ll catch any problems and let you refactor without causing regressions. Again, this may be partially manual and partially iteration with LLMs. Once the tests look good, it’s time to get them passing. This is the point where I start really reading through the code and getting things from 90% to 100%.
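    The tests-before-correctness step above, in miniature (the `slugify` function and its tests are hypothetical, purely to show the shape): lock the intended behavior in with a small suite first, so the later cleanup pass can’t silently regress anything.

```python
def slugify(title: str) -> str:
    # Hypothetical function under test: lowercase, spaces become hyphens.
    return "-".join(title.lower().split())

def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_collapses_spaces():
    assert slugify("a   b") == "a-b"

# Run the suite (in practice this would be pytest, not manual calls).
test_slugify_basic()
test_slugify_collapses_spaces()
print("tests pass")
```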

    All in all, I’m still applying a lot of professional judgement throughout the whole process. But I get to focus on the parts where that judgement is actually needed and not the more mundane and toilsome parts of coding.


  • VoterFrog@lemmy.world to Science Memes@mander.xyz · “observes your slit” · 1 month ago

    As far as I understand as a layman, the measurement tool doesn’t really matter. Any observer needs to interact with the photon in order to observe it and so even the best experiment will always cause this kind of behavior.

    With no observer: the photon, acting as a wave, passes through both slits simultaneously and on the other side of the divider, starts to interfere with itself. Where the peaks or troughs of the wave combine is where the photon is most likely to hit the screen in the back. In order to actually see this interference pattern we need to send multiple photons through. Each photon essentially lands in a random location and the pattern only reveals itself as we repeat the experiment. This is important for the next part…

    With an observer: the photon still passes through both slits. However, the interaction with the observer’s wave function causes the part of the photon’s wave in that slit to offset in phase. In other words, the peaks and troughs are no longer in the same place. So now the interference pattern that the photon wave forms with itself still exists but, critically, it looks completely different.

    Now we repeat with more photons. BUT each time you send a photon through, it comes out with a different phase offset. Why? Because the outcome of the interaction with the observer is governed by quantum randomness. So every photon winds up with a different interference pattern, which means there’s no consistency in where they land on the screen. It just looks like random noise.

    At least that’s what I recall from an episode of PBS Space Time.
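    The phase-offset argument can be sketched numerically (all units arbitrary, and this is only a toy model of the idea above): average many single-photon fringe patterns, each with a random extra phase, and the fringes wash out flat.

```python
import cmath
import math
import random
import statistics

random.seed(0)

k_d = 250.0                              # wavenumber * slit separation
xs = [i / 250 - 1 for i in range(501)]   # screen positions in [-1, 1]

def pattern(offset):
    # Single-photon intensity |1 + e^{i(k*d*x + offset)}|^2, where the
    # relative phase comes from the path-length difference to each slit.
    return [abs(1 + cmath.exp(1j * (k_d * x + offset))) ** 2 for x in xs]

# No observer: every photon carries the same relative phase, so the
# fringes line up and survive averaging.
coherent = pattern(0.0)

# With an observer: each photon picks up a random extra phase, so each
# forms a *shifted* fringe pattern and the average washes out flat.
n = 2000
acc = [0.0] * len(xs)
for _ in range(n):
    p = pattern(random.uniform(0, 2 * math.pi))
    acc = [a + v / n for a, v in zip(acc, p)]

# Fringe contrast: large for the coherent case, near zero when averaged
# over random offsets.
print(statistics.pstdev(coherent), statistics.pstdev(acc))
```

    The coherent pattern swings between 0 and 4, while the averaged one sits near a flat 2 everywhere, which is the “looks like random noise” outcome.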


  • VoterFrog@lemmy.world to Science Memes@mander.xyz · “On Black Holes...” · 2 months ago

    Unfortunately, the horrible death would come long before you even reach the event horizon. The tidal forces would tear you apart and, eventually, tear apart the very molecules that made you up. Every depiction of crossing a black hole’s event horizon just pretends that doesn’t happen for the sake of demonstration.


  • My favorite use is actually just to help me name stuff. Give it a short description of what the thing does and get a list of decent names. Refine if they’re all missing something.

    Also useful for finding things quickly in generated documentation, by attaching the documentation as context. And I use it when trying to remember some of the more obscure syntax stuff.

    As for coding assistants, they can help quickly fill in boilerplate or maybe autocomplete a line or two. I don’t use it for generating whole functions or anything larger.

    So I get some nice marginal benefits out of it. I definitely like it. It’s got a ways to go before it replaces the programming part of my job, though.




  • VoterFrog@lemmy.world to Science Memes@mander.xyz · “OKBuddyGalaxyBrain” · 2 months ago

    I don’t think it’s working. LLMs don’t have any trouble parsing it.

    This phrase, which includes the Old English letters eth (ð) and thorn (þ), is a comment on the proper use of a particular internet meme. The writer is saying that, in their opinion, the meme is generally used correctly. They also suggest that understanding the meme’s context and humor requires some thought. The use of the archaic letters ð and þ is a stylistic choice to add a playful or quirky tone, likely part of the meme itself or the online community where it’s shared. Essentially, it’s a statement of praise for the meme’s consistent and thoughtful application.
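    Part of why an LLM has no trouble with it: the meme is just a character substitution, since þ (thorn) and ð (eth) both stood for “th” sounds. A trivial mapping undoes it (the sample sentence below is mine, not from the thread):

```python
def unthorn(text: str) -> str:
    # Reverse the meme's substitution: thorn and eth both map back to "th".
    for old, new in (("þ", "th"), ("ð", "th"), ("Þ", "Th"), ("Ð", "Th")):
        text = text.replace(old, new)
    return text

print(unthorn("I þink ðis meme is usually used correctly."))
# → I think this meme is usually used correctly.
```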




  • VoterFrog@lemmy.world to Science Memes@mander.xyz · “Black Holes” · 2 months ago

    I heard it more like: the fact that our universe is expanding faster than light means there are parts of the universe we can never reach, even at light speed, which is mathematically identical to the event horizon of a black hole, which not even light can escape from. There’s no singularity at the center of our observable universe, though.

    Just to add to this… It’s not like there’s an event horizon like with a black hole. It’s just that in the amount of time it would take the light to reach us, there will have been more space “created” than the distance the light was able to travel. For someone living near the edge of our observable universe, there’s nothing strange happening. In fact, we’d be at the edge of their observable universe, the edge of their “event horizon.”
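    A back-of-the-envelope sketch of that horizon idea: the Hubble radius is the distance at which the recession speed equals c. The H0 value here is an assumed round number, and note the full observable universe is actually larger (roughly 46 billion light-years comoving), because space keeps expanding while the light is in flight.

```python
c_km_s = 299_792.458      # speed of light, km/s
H0 = 70.0                 # Hubble constant, km/s per Mpc (assumed round value)
ly_per_mpc = 3.2616e6     # light-years per megaparsec

# Distance at which Hubble-flow recession speed reaches c.
hubble_radius_mpc = c_km_s / H0
hubble_radius_gly = hubble_radius_mpc * ly_per_mpc / 1e9

print(f"Hubble radius ~ {hubble_radius_gly:.1f} billion light-years")
```

    That works out to about 14 billion light-years: beyond that distance, today’s recession speed already exceeds c.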



  • Sure you can move some parts of the conversation to a review session, though I think the answers will be heavily influenced by hindsight at that point. For example, hearing about dead end paths they considered can be very informative in a way that I think candidates assume is negative. Nobody expects you to get it right the first time and telling the interviewer about your binary tree solution (that actually doesn’t work) can be a good thing.

    But the biggest problem, I think, with not being in the room as an interviewer is that you lose the opportunity to hint and direct the candidate away from unproductive solutions or uses of time. There are people who won’t ask questions about things that are ambiguous, or they’ll misinterpret the problem, and that shouldn’t be a deal breaker.

    Usually it only takes a very subtle nudge to get things back on track, otherwise you wind up getting a solution that’s not at all what you’re looking for (and more importantly, doesn’t demonstrate the knowledge you’re looking for). Or maybe you wind up with barely a solution because the candidate spent most of their time spinning their wheels. A good portion of the questions I ask during an interview serve this purpose of keeping the focus of the candidate on the right things.


  • I’m not sure that offline or solo coding tests are any better. A good coding interview should be about a lot more than just seeing if the candidate produces well-structured, optimal code. It’s about seeing what kinds of questions they’ll ask, what kinds of alternatives and trade-offs they’ll consider, and probing some of the decisions they make. All the stuff that goes into being a good SWE, which you can demonstrate even if you’re having trouble coming up with the optimal solution to this particular problem.


  • I think it definitely depends on the level of involvement and the intent. Sure not everybody who just asks for something to be made for them is doing much directing. But someone who does a lot of refinement and curation of AI generated output needs to demonstrate the same kind of creativity and vision as an actual director.

    I guess I’d say telling an artist to do something doesn’t make you a director. But a director telling an AI to do the same kinds of things they’d tell an artist doesn’t suddenly make them not a director.



  • The language model isn’t teaching anything it is changing the wording of something and spitting it back out. And in some cases, not changing the wording at all, just spitting the information back out, without paying the copyright source.

    You could honestly say the same about most “teaching” that a student without a real comprehension of the subject does for another student. But ultimately, that’s beside the point. Because changing the wording, structure, and presentation is all that is necessary to avoid copyright violation. You cannot copyright the information. Only a specific expression of it.

    There’s no special exception for AI here. That’s how copyright works for you, me, the student, and the AI. And if you’re hoping that copyright is going to save you from the outcomes you’re worried about, it won’t.