Some time ago I searched for a guide like this, and since I couldn't find one, I hope to help others in a similar situation by providing some guidance.
After all, this is my opinion; it is biased and it might be full of shit, so please point that out. But this is what has helped me over the past few years, and it is still roughly what I do on a day-to-day basis to get my job done and keep people happy.
So, enough of the babbling, let's get to it.
I have to start a bit further back to explain my situation.
I'm a solo developer at a research institute in Germany. On a daily basis we produce radiopharmaceuticals for human application as well as for pre-clinical and in-vitro studies.
The first project I was assigned was to write the software that controls our self-built synthesis devices. Since I came from a .Net background before starting my college studies and apprenticeship, the natural thing for me was to find out how to do this in .Net. We use interface hardware from National Instruments, so I'm using the dedicated interfaces provided by their Measurement Studio. It's not the best in the world, but it works pretty solidly, and we haven't had a failure on that end in the 3 years since deployment, so I'd recommend it if you need interface hardware for tasks like controlling valves, heating and more.
As time passes, you will get to know more about the users, the applications, how everything is connected, and what happens when and why.
You don’t need to use the latest framework
Something that is quite hard to accept is that you don't always need to use the latest framework. Especially in a controlled environment like the one I'm working in (at least partially), you have to take every dependency into consideration.
I write software that is business-critical. If my software fails, I WILL get called shortly after, if not during, the synthesis. We don't have any 1st, 2nd or 3rd level support; I am the one who does all of it. This means I have to consider every aspect of the software that could cause future problems and try to avoid them as much as possible.
To give you an example of this:
One of the requirements we have for our production and quality control is that every step during a synthesis is recorded (if you want to learn more about this, search for Annex 11, GMP and the FDA guidelines). That means not only the fully automated parts, but also every user interaction that happens. Since we want our data to be as easily readable as possible, I used RabbitMQ as a message broker to transfer data acquired at the respective clients to our central server. A .Net application persists the data into a database and gives the users the possibility to access it. The data is presented in the form of time series and an ordered log.
Sure, I could commit the data from the client directly into the database, but in case of a network outage we couldn't produce radiopharmaceuticals, and since the network is out of my hands (in that regard), I'd have to explain that to my boss. So RabbitMQ caches the data locally and transfers it whenever the connection can be established successfully. Plus, it takes care of making sure each message gets transferred only once.
This system works so well that when we recently reconnected one of the demo PCs we had used at a presentation last October, all the data was transferred in a matter of minutes.
I could have used any other message broker for this purpose, but RabbitMQ "advertises" its reliability, and so far I can only agree.
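The store-and-forward behavior described above can be sketched in a few lines. This is not RabbitMQ's actual API, just a minimal Python illustration of the pattern (all names here are made up): buffer every message locally, and flush the buffer in order whenever the connection works again.

```python
import collections


class StoreAndForward:
    """Minimal sketch of the store-and-forward idea: buffer messages
    locally and flush them once the connection works again. This is
    an illustration of the pattern, not RabbitMQ's real API."""

    def __init__(self, send):
        # `send` is a callable that delivers one message to the server
        # and raises ConnectionError while the network is down.
        self._send = send
        self._buffer = collections.deque()

    def publish(self, message):
        # Always buffer first, then try to flush everything.
        self._buffer.append(message)
        self.flush()

    def flush(self):
        # Deliver buffered messages in order; stop at the first failure
        # so nothing is lost and nothing is sent twice.
        while self._buffer:
            try:
                self._send(self._buffer[0])
            except ConnectionError:
                return  # still offline, keep the rest buffered
            self._buffer.popleft()  # remove only after a successful send


delivered = []
online = {"up": False}

def send(msg):
    if not online["up"]:
        raise ConnectionError("network down")
    delivered.append(msg)

q = StoreAndForward(send)
q.publish("step 1")   # network down: stays buffered
q.publish("step 2")   # still buffered
online["up"] = True
q.publish("step 3")   # connection back: everything flushes, in order
# delivered == ["step 1", "step 2", "step 3"]
```

Removing a message from the buffer only after a successful send is what keeps the demo-PC scenario above working: anything not yet delivered simply waits for the next successful flush.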
Learn new technologies when the right time comes
Whenever something new is presented on Hacker News or Reddit, the community immediately embraces it. At least it feels like that. This is not to say it's a bad thing, but as mentioned previously, in my case I have to be even more careful about which dependencies I want to use in my applications. Will it be supported for years to come? How long has it been in use? Are there success stories of businesses using it?
I wanted to learn about WebSockets, so I thought about a place where real-time information could be useful, and I extended the web application side of my software to use WebSockets to send data in real time to clients watching the report of a synthesis. If it doesn't work, nothing breaks; it just doesn't work, and everyone can live without real-time information about the syntheses going on. If it works, it allows anyone in our network to check the progress.
I could have used InfluxDB or any of the other time-series databases available nowadays, but what benefit would I gain from that, besides some performance? I decided to use PostgreSQL as our main datastore. With the correct indexes in place, the performance is enough for our purpose. The users don't care whether the report is generated a second earlier or later; they need their data, and they need to get it reliably. That's what the system does, and has been doing for years.
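As a rough illustration of "the correct indexes": for report queries that fetch all measurements of one synthesis in time order, a composite index over (run id, timestamp) lets the database walk the index instead of scanning the whole table. The sketch below uses SQLite from Python's standard library so it is self-contained; the table and column names are made up, and the same composite-index idea carries over to PostgreSQL.

```python
import sqlite3

# In-memory SQLite stands in for the real PostgreSQL database here;
# table and column names are illustrative.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE measurements (
        synthesis_id INTEGER,
        recorded_at  TEXT,
        sensor       TEXT,
        value        REAL
    )
""")
# One index covering the typical report query: all measurements of
# one synthesis, ordered by time.
con.execute("""
    CREATE INDEX idx_measurements_run_time
    ON measurements (synthesis_id, recorded_at)
""")

plan = con.execute("""
    EXPLAIN QUERY PLAN
    SELECT recorded_at, sensor, value
    FROM measurements
    WHERE synthesis_id = ?
    ORDER BY recorded_at
""", (42,)).fetchall()

# The plan shows the query searching via the index rather than
# scanning the table, and the index order also satisfies ORDER BY.
print(plan)
```

With the index matching both the WHERE clause and the sort order, report generation stays fast enough that a dedicated time-series database isn't needed for this workload.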
What about containers? Docker is everywhere
Yep, that's right, it's hard to find a place without Docker these days, but does it make everything easier? Does it automatically solve problems?
No, it doesn't.
Since I'm the one who has to support the underlying hardware as well as the software running on it, I want a system that I can explain to someone over a beer, not one where I need a whole datacenter and multiple abstraction layers before getting to the core of my application.
Not everyone is running at Google/Netflix/Amazon scale. Sure, I'd love to have a cluster that automatically heals and is resilient against any kind of problem that could arise, but if that requires me to learn more and more and even more concepts, the system is not for me. Plus, every euro spent on anything IT-related is a euro not going directly into research. After all, we do research, we publish our work, and at the end of the day, that's how we're measured. That doesn't mean we don't spend money on hardware and software, but rather that we use older hardware that handles our use case perfectly fine instead of buying newer gear that would be more expensive and sit idle 90% of the time.
Implement your own best practices
Best practices are often based on large teams: multiple developers working on the same project, handling different parts of it. Be it agile, which has been hyped over and over, TDD, or the current trend of serverless or Electron apps.
After all, you will be the one supporting what you’re writing.
I'm fine without TDD. If I write code, I test it. I test it manually, I deploy it on a test system that is okay without 100% uptime, and I let the users test it. Not just any users, but the users who will be using the software later. I ask for their feedback and encourage them to tell me what could be wrong and whether they have a better idea for solving a problem. I'm the one translating that into code and shaping it so they can use it easily to fulfill their jobs.
Agile is a good tool if applied properly, but in a scenario where I'm the one doing the work, I don't need agile. Whenever a new requirement arises, I put it in a tracker in GitLab or write it on a Post-it on my desk. I can see it there all the time and I can work on it, but why would I want to present burndown charts to myself? And my boss couldn't care less about that. He cares that the systems work so that everyone can do their job.
Sometimes apply common best practices
Some best practices exist for a reason. Version control is one of the most important things in software development. Even if you keep a linear history without feature branches and feature flags: commit it, push it, and you have a backup of your work readily available with a single command. Nowadays there is simply NO reason to ever lose any code you have written.
Code conventions are another example. There are great tools to assist with this, e.g. ReSharper. The first thing I do on a new machine after installing Visual Studio is install ReSharper. I'm used to it, I like it, and it will complain when something doesn't look the way it's supposed to; just click on whatever it's complaining about and accept the fix. Your code will be better. As simple as that.
This is not about subscribing to my newsletter; this is about staying informed.
You can only improve as a person and as a developer if you stay informed. Try new things, learn about current technologies, read books and blogs. I get 90% of the really interesting facts from blog articles I read. Interesting as in relevant to me personally or to my job.
This year I began reading more books; the last book before that was back in school, I guess. This year alone I read a nice thriller (Daemon by Daniel Suarez, totally recommended), a great programming book (The Clean Coder by Robert C. Martin) and an interesting psychology book (Predictably Irrational by Dan Ariely).
Standstill = Regression
Therefore, find some interesting books and read them. Or articles. Or find a nice toy project that could be fun, such as home automation: monitor room temperature and humidity with a Raspberry Pi, build a smart mixer, just do whatever you might be interested in.
Toy projects help me get new ideas and think outside the box.
Do what you love
This one might be the most important point on my list, and that's why it's last. Our industry is developing more and more in a direction where we're supposed to be coding in our spare time to have a portfolio to present on GitHub so we can apply to potential jobs with it. If you want to do this, that's perfectly fine, but if you don't, that's fine too. Do what you love to do in your free time to clear your mind. The mind is our most important organ; without it we can't do our job.
If you want to spend your time riding your motorcycle, do it. If you want to get into photography, do that. Only if you do what you love will you be at your peak performance, and that includes not permanently thinking about programming.