It sounds like you’re confusing the application with the data. Nothing in this model requires the use of production data.
I feel like you must have read an entirely different post, which must be a failing in my writing.
I would never condone baking secrets into a compose file, which is why the values in compose.yaml aren't secrets. The idea is that your compose file is used exclusively for testing and development, where the data isn't real and the priority is easing development. When you deploy, you don't use that compose file, because your environment is populated by whatever you use in production (typically Kubernetes these days).

You shouldn't store your development database password in a .env file, because it's not a secret. The AWS keys listed in the compose file are meant to be exactly as they appear: XXX, because LocalStack doesn't care what these values are, only that they exist.
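As a minimal sketch of why those values don't matter, here's what talking to LocalStack from Python with boto3 might look like. The endpoint URL is LocalStack's usual default, but treat it and the bucket name as assumptions for your own setup:

```python
import boto3

# LocalStack only checks that credentials are *present*, not that they're
# valid, so deliberately meaningless values are fine here.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",  # LocalStack's default edge port
    aws_access_key_id="XXX",
    aws_secret_access_key="XXX",
    region_name="us-east-1",
)

s3.create_bucket(Bucket="my-dev-bucket")  # hypothetical bucket name
print(s3.list_buckets()["Buckets"])
```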
As for the CLI thing, again I think you've missed the point. The idea is to start from a position of "I'm building images" and therefore never have a "local app (Django, sqlite)", because sqlite shouldn't be used unless that's what's used in production. There should be little to no difference between development and production, so scripting a bridge between the two doesn't make a lot of sense to me.
I don’t mean to be snarky, but I feel like you didn’t actually read the post 'cause pretty much everything you’ve suggested is the opposite of what I was trying to say.
.json or .env files. The litmus test here is: "How many steps does it take to get this project running?" If it's more than 1 (docker compose up), it's too many.

High praise! Just keep in mind that my blog is a mixed bag of topics. A little code, lots of politics, and some random stuff to boot.
It’s a tough one, but there are a few options.
For AWS, my favourite one is LocalStack, a Docker image that you can stand up like any other service and then tell it to emulate common AWS services: S3, Lambda, etc. They claim to support 80 different services, which is… nuts. They've got a strange licensing model though, which last time I used it meant that they support some of the more common services for free, but if you want more you gotta pay… and they aren't cheap. I don't know if anything like this exists for Azure.
The next-best choice is to use a stand-in. Many cloud services are just managed+branded Free software projects: RDS is either PostgreSQL or MySQL, ElastiCache is just Redis, etc. For these, you can just stand up a copy of the actual service, and since the APIs are identical, you should be fine. Where it gets tricky is when the cloud provider has messed with the API or added functionality that doesn't exist elsewhere. SQS, for example, is kind of like RabbitMQ, but not quite.
In those cases, it's a question of how your application interacts with the service. If it's by way of an external package (say, Celery talking to SQS), then using RabbitMQ locally and SQS in production is probably fine, because it's Celery that's managing the distinction, not you. They've done the work of testing compatibility, so theoretically you don't have to.
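As a rough sketch of that arrangement (the variable name and URLs here are illustrative assumptions, not anything from the post), the broker is just a URL, so swapping RabbitMQ for SQS is purely a configuration change:

```python
import os

from celery import Celery

# Hypothetical setup: RabbitMQ in development, SQS in production.
# Celery abstracts the difference; your task code never sees it.
#   Development: amqp://guest:guest@rabbitmq:5672//
#   Production:  sqs://  (credentials come from the environment or an IAM role)
broker_url = os.environ.get("BROKER_URL", "amqp://guest:guest@rabbitmq:5672//")

app = Celery("myproject", broker=broker_url)


@app.task
def add(x, y):
    return x + y
```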
If, however, your application is the kind of thing that interacts with the service at a low level, opening a direct connection and speaking its protocol yourself, that's probably not a good idea.
That leaves the third option, which isn't great, but I've done it and it's not so bad: use the cloud service in development. Normally this is done by having separate services spun up per user, or even with a role account. When your app writes to an S3 bucket locally, it's actually writing to a real bucket called companyname-username-projectbucket. With tools like Terraform, the fiddly process of setting all this up can be drastically simplified, so it's not so bad – just make sure the developers are aware that their actions can incur costs, is all.
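A small sketch of how an app might pick up its per-developer bucket; the naming convention and environment variable are hypothetical:

```python
import getpass
import os

import boto3

# Hypothetical convention: one real bucket per developer, provisioned by
# Terraform as companyname-<username>-projectbucket.
bucket = os.environ.get(
    "PROJECT_BUCKET",
    f"companyname-{getpass.getuser()}-projectbucket",
)

s3 = boto3.client("s3")
s3.put_object(Bucket=bucket, Key="hello.txt", Body=b"written from development")
```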
If none of the above are suitable, then it's probably time to stub out the service and rely more heavily on a QA or staging environment that's more reflective of production.
Having used it for work, I really don't understand the appeal, especially when compared to tools like Poetry. Uv persists in depending on requirements.txt, doesn't streamline the publishing process, and, contrary to the claims, it's not a drop-in replacement for pip, as the command-line API is different.
It’s really fast, which is nice if you’re working on a nightmare codebase with 3000 dependencies, but most of us aren’t, and Poetry is pretty damned fast.
If uv offered some of what Poetry does for me, if at the very least we could finally do away with requirements.txt and adopt something more usable – baked into pyproject.toml of course – then I'd be sold. But this is just faster pip.
Maybe I'm misunderstanding vulture. My impression was that it's meant to be run in your CI, which would mean it's only privy to code executed by your tests. If it actually attaches to production sessions, then yeah, that's pretty handy.
If you ensure 100% test coverage, you don’t need this ;-)
My thoughts exactly. What I want is Poetry's workflow and use of pyproject.toml baked into Python.
The easiest & cheapest option would be to expose one of the devices to the internet on a known port and connect from the other device to that one with SSH.
Once you’ve got a connection, you can do pretty much anything you want, including writing to a pipe or even a file and polling it.
If you don't want to expose either device, then you need a third party on the open internet to facilitate the connection, though that server can be yours too. Even a €4/mo box at Hetzner would do it.
If these options sound good, let me know and I can be more detailed.
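For the first option, here's a rough sketch of the connecting side in Python using paramiko; the hostname, username, key, and file path are all placeholder assumptions:

```python
import os

import paramiko

client = paramiko.SSHClient()
# Fine for a sketch; in practice, pin the host key instead.
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(
    "exposed-device.example.com",  # the device you exposed to the internet
    port=22,
    username="pi",
    key_filename=os.path.expanduser("~/.ssh/id_ed25519"),
)

# Once connected, you can run commands, or write to a file on the remote
# device that another process polls.
sftp = client.open_sftp()
with sftp.open("/tmp/inbox.txt", "a") as remote_file:
    remote_file.write("hello from the other device\n")

client.close()
```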
I learned Python by starting with a project and then seeking out tutorials to help with that subject.
My project was a simple website, so the Django Project's official tutorial was where I spent most of my time. These days it's still excellent, and there's now Django Girls, which hosts a larger set of tutorials as well.
Maybe you’re not interested in web stuff, and that’s fine. My advice would be to figure out what you want to learn more specifically and look for tutorials for that. If nothing else, it’ll make the learning process more interesting.
Congratulations! That feeling of "I built a thing!" never gets old. If you'd like some help troubleshooting or would like a code review, feel free to post a link to the source here.
Not to mention the fact that the densities of the materials differ, so you'd also have to factor material density into the conversion: 100ml of sugar (a denser material) is much heavier than 100ml of flour.
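A quick back-of-the-envelope illustration; the density figures are rough approximations and vary by brand and how the ingredient is packed:

```python
# Approximate densities in g/ml (rough, commonly cited figures).
SUGAR_DENSITY = 0.85  # granulated sugar
FLOUR_DENSITY = 0.55  # all-purpose flour, loosely scooped

volume_ml = 100
print(f"{volume_ml} ml of sugar ≈ {volume_ml * SUGAR_DENSITY:.0f} g")  # ≈ 85 g
print(f"{volume_ml} ml of flour ≈ {volume_ml * FLOUR_DENSITY:.0f} g")  # ≈ 55 g
```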
If it's a path, you should use Path. If it's a regular expression, define it as such with re.compile(). If the purpose of such a module is to reduce boilerplate, then defining these values as strings only necessitates boilerplate later on to convert them.
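A minimal sketch of what that looks like in a hypothetical constants module:

```python
# constants.py: store values as the objects callers will actually use,
# rather than as strings they'd have to convert themselves.
import re
from pathlib import Path

# A Path, ready for the / operator, .exists(), etc. (hypothetical path)
DATA_DIR = Path("/var/lib/myapp/data")

# A compiled pattern, ready for .match() and .search().
SLUG_RE = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)*$")
```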
There's one edge case to consider, though it's very "edge": if you define a very long/complex regex as a constant and then import from that file, Python will run re.compile() on import. If you're not actually using that regex (say you imported the file to use a different value), then you're burning CPU there for no reason.
I generally don't recommend that you concern yourself with that sort of thing though, until you run into real performance problems. Most regexes compile in no time at all, and storing everything as the object you'd expect has big benefits for developer cognitive load.
The docs are pretty great… once you're deep into them and understand the stuff they gloss over. At a beginner level, you'll probably benefit more from a tutorial specifically covering the task you're trying to accomplish.
Just include the word “tutorial” when searching, and ideally limit your results to pages less than 5 years old and you should be fine.
But there's nothing stopping you from loading realistic (or even real) data into a system like this. They're entirely different concepts. Indeed, I've loaded gigabytes of production data into systems similar to what I'm proposing here (taking all necessary precautions, of course). At one company, I even built a system that pulled production data into a developer-friendly snapshot while simultaneously pseudo-anonymising it, so it could be safely (for some value of ${safe}) tinkered with in development.
In fact, adhering to a system like this makes such things easier, since you don’t have to make any concessions to “this is how we do it in development”. You just pull a snapshot from the environment you want to work with and load it into your Compose session.