
Any tips to help a scientist become a better programmer?

Hey there!

I'm a chemical physicist who has been using python (as well as matlab and R) for a lot of different tasks over the last ~10 years, mostly for data analysis but also to automate certain tasks. I am almost completely self-taught, and though I have gotten help and tips from professors throughout the completion of my degrees, I have never really been educated in best practices when it comes to coding.

I have some friends who work as developers but have a similar academic background as I do, and through them I have become painfully aware of how bad my code is. When I write code, it simply needs to do the thing, conventions be damned. I do try to read up on the "right" way to do things, but the holes in my knowledge become pretty apparent pretty quickly.

For example, I have never written a class and I wouldn't know why or where to start (something to do with the init method, right?). I mostly just write functions and scripts that perform the tasks that I need, plus some work with jupyter notebooks from time to time. I only recently got started with git and uploading my projects to github, just as a way to try to teach myself the workflow.

So, I would like to learn to be better. Can anyone recommend good resources for learning programming, ideally ones aimed at people who already know a language? It'd be nice to find a guide that assumes you know more than a beginner. Any help would be appreciated.

70 comments
  • Read your own code that you wrote a month ago. For every wtf moment, try to rewrite it in a clearer way. With time you will internalize what is or is not a good idea. Usually this means naming your constants, moving code into a function so it gets a friendly name that explains what it does, or moving code out of a function because the abstraction you chose was not a good one (there's a made-up sketch of that kind of refactor at the end of this comment). Since you have 10 years of experience it's highly possible that you already do that, so just continue :)

    If you are motivated I would advise taking a look at Rust. The goal is not really to be able to use it (even if it's nice to be able to write fast code to speed up your Python), but the Rust compiler is like a very exacting teacher that will not forgive any mistakes, while explaining why what you did is not a good idea and what you should do instead. The quality of the error messages is crucial; this is what will help you understand and improve over time. So consider Rust an exercise to become a better Python programmer: whatever you try to do in Rust, try to understand how it applies to Python. There are many tutorials online. The official book is a good start. And in general, learning new languages with a very different paradigm is the best way to improve, since it will help you see things from a new angle.
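
    To make the first paragraph concrete, here is a small, made-up before/after (the constants and names are hypothetical, just to show the shape of the refactor):

    ```python
    # Before: magic numbers in an anonymous pile of arithmetic.
    # energies = [6.626e-34 * 2.998e8 / (wl * 1e-9) for wl in data]

    # After: named constants plus a small function whose name says what it does.
    PLANCK_J_S = 6.626e-34
    SPEED_OF_LIGHT_M_S = 2.998e8

    def photon_energy_joules(wavelength_nm: float) -> float:
        """Convert a wavelength in nanometres to a photon energy in joules."""
        return PLANCK_J_S * SPEED_OF_LIGHT_M_S / (wavelength_nm * 1e-9)

    energies = [photon_energy_joules(wl) for wl in (450.0, 532.0, 633.0)]
    print(energies)
    ```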

  • Most of the "conventions" (which are normally just "good practices") are there to make the software easier to maintain, to make teamwork more efficient, to manage complexity in large code-bases, to reduce the chance of mistakes and to give a little boost in productivity.

    For example, using descriptive names for variables (e.g. "sampleDataPoints" rather than "x") reduces the chance of mistakes due to confusing variables (especially in long stretches of code) and allows others (and yourself, if you haven't looked at that code for many months) to pick up much faster what's going on there in order to change it. Dividing your code into functions, on the other hand, promotes reuse of the same code in many places without the downsides of copy & paste all over the place, such as growing the code base (which makes it costlier to maintain) and, worse, unwittingly copying and pasting bugs, so now you have to fix the same stuff in several places (and might even forget one or two) rather than just fixing it in that one function.

    Stuff at a higher, software-design level, such as classes, is meant to help structure the code into self-contained blocks with clear, well-controlled ways of interacting between them, thus reducing overall complexity (everything potentially connecting to everything else is the most complex web of connections you could have), increasing productivity (less stuff to consider at any one point while writing some code, since it can't access everything), reducing bugs (less possibility of mistakes when certain things can only be changed by a certain part of the code) and making it easier for others to use your stuff (they don't need to know how your classes work, only how to talk to them, like a mini library). That said, it's perfectly feasible to achieve a similar result without classes, using scope only, though more advanced features of classes such as inheritance won't be easy to emulate like that. (There's a tiny example class at the end of this comment.)

    That said, if your programs are small, pretty much single-use (i.e. you don't have to keep using them for years) and you're not working on the code as a team, you can get away with not using most "conventions" (certainly the design-level stuff) with only the downside of some loss in productivity (you lose code clarity and simplicity, which increases the likelihood of bugs and makes it slower to traverse the code and spot things when you have to go back and forth to change them).

    I've worked with people who weren't programmers but did code (namely Quants in Finance) and they're simply not very good at what is, for them, a secondary job (Quants mainly do mathematical modelling), which is absolutely normal because, unlike actual Developers, doing code well and efficiently is not what their focus has been on for years.
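
    Since you mentioned never having written a class: here is a deliberately tiny, hypothetical one, just to show the shape. __init__ only sets up the object's own data, and the rest of the program can only touch that data through the methods, which is the "well-controlled ways of interaction" point above.

    ```python
    class RunningAverage:
        """Keeps a running mean of readings; its state only changes via add()."""

        def __init__(self):
            # __init__ just initialises the object's own data ("state").
            self._total = 0.0
            self._count = 0

        def add(self, value: float) -> None:
            self._total += value
            self._count += 1

        def mean(self) -> float:
            if self._count == 0:
                raise ValueError("no readings yet")
            return self._total / self._count

    avg = RunningAverage()
    for reading in (1.2, 1.4, 1.1):
        avg.add(reading)
    print(avg.mean())
    ```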

  • Do you want to work as a developer? Or do you want to continue with your research and analysis? If you're only writing code for your own purposes, I don't know why it matters if it's conventional.

    • I guess if you are unlikely to go back and change it, or understand how it works, then sure. And yeah that happens.

      I write scripts and utilities like that. Modularity is overkill although I do toss in a comment or two to give a hint to future me, just in case.

      Although tbf, I took plenty of CS classes and some of the instructors beat best practices into our heads... So writing sloppy, arcane, spaghetti code causes me to flinch...

  • Learn Haskell.

    Since it is a research language, it is packed with academically-rigorous implementations of advanced features (currying, lambda expressions, pattern matching, list comprehension, type classes/type polymorphism, monads, laziness, strong typing, algebraic data types, parser combinators that allow you to implement a DSL in 20 lines, making illegal states unrepresentable, etc) that eventually make their way into other languages. It will force you to learn some of the more advanced concepts in programming while also giving you a new perspective that will improve your code in any language you might use. (There's a rough Python-flavoured sketch of two of these ideas at the end of this comment.)

    I was big into embedded C programming years back ... and when I got to the pointers part, I couldn't figure out why I suddenly felt unsatisfied and that I was somehow doing something wrong. That instinct ended up being at least partially correct. I sensed that I was doing something unsafe (which forced me to be very careful around footguns like pointers, dedicating extra mental processes to keep track of those inherently unsafe solutions) and I wished there was some more elegant way around unsafe actions like that (or at least some language provided way of making sure those unintended side effects could be enforced by the compiler, which would prevent these kinds of bugs from getting into my code).

    Years later, after not enjoying JS, TS (IMO, a porous condom over the tip of JavaScript), Swift, Python, and others, my journey brought me to FRP which eventually brought me to FP and with it, Haskell, Purescript, Rust, and Nix. I now regularly feel the same satisfaction using those languages that I felt when solving a math problem correctly. Refactoring is a pleasure with strictly typed languages like that because the compiler catches almost everything before it will even let you compile.
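
    Purely to connect this back to the Python you already write, here is a rough, hypothetical sketch of two of the ideas listed above (algebraic data types and pattern matching) using Python 3.10+ dataclasses and structural pattern matching. Haskell enforces all of this at compile time, which Python does not, so treat this as a taste rather than the real thing:

    ```python
    from dataclasses import dataclass

    # Hypothetical example: model a measurement outcome so a "success" always
    # carries a value and a "failure" always carries a reason -- no half-formed states.
    @dataclass(frozen=True)
    class Success:
        value: float

    @dataclass(frozen=True)
    class Failure:
        reason: str

    def describe(result: Success | Failure) -> str:
        # Structural pattern matching (Python 3.10+), a loose analogue of
        # Haskell-style case analysis over an algebraic data type.
        match result:
            case Success(value=v):
                return f"measured {v:.3f}"
            case Failure(reason=r):
                return f"measurement failed: {r}"
            case _:
                raise TypeError("unexpected result type")

    print(describe(Success(3.141)))
    print(describe(Failure("detector offline")))
    ```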

  • While there are lots of programming courses out there, not many of them will explicitly teach you about good programming principles. Here are a few things off the top of my head:

    • High cohesion, low coupling. That is, when you divide up code into functions and classes, try to minimize the number of things passed between those functions (if your functions regularly have 6+ arguments, that's a red flag and worth reviewing; see the sketch at the end of this comment). And when something needs to be broken up into pieces, try to find the spots where there are minimal points of contact.
    • Try to divide code between functions and files in a way that doesn't feel too busy. If there are a bunch of related functions that are cluttering up one file, or that are referenced from multiple places, consider making a module for those. If you're not sure what "too busy" means...
    • Read a style guide. There are lots of things that will help you clean up and organize your code. The guide won't necessarily tell you why to do each thing, but it's a great tool when you don't have another point of reference.

    If you have a chance to take a "Software Engineering 101" class, this is where you'd learn most of the basic principles for writing better code.
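
    To make the "6+ arguments" red flag concrete, here's a small hypothetical sketch (the names are invented): group related parameters into one object so the function's interface stays narrow.

    ```python
    from dataclasses import dataclass

    @dataclass
    class FitSettings:
        """Related knobs travel together instead of as six separate arguments."""
        model: str = "gaussian"
        max_iterations: int = 500
        tolerance: float = 1e-8
        baseline_correction: bool = True

    def fit_spectrum(wavelengths, intensities, settings: FitSettings):
        # Real fitting code would go here; the point is the narrow interface.
        print(f"fitting {len(wavelengths)} points with a {settings.model} model")

    fit_spectrum([400.0, 410.0, 420.0], [0.1, 0.9, 0.2], FitSettings(tolerance=1e-6))
    ```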

  • If you don't already, use version control (git or otherwise) and try to write useful messages for yourself. 99% of the time, you won't need them, but you'll be thankful that 1% of the time. I've seen database engineers hack something together without version control and, honestly, they'd have looked far more professional if we could see recent changes when something goes wrong. It's also great to be able to revert back to a known good state.

    Also, consider writing unit tests to prove your code does what you think it does. This is sometimes more useful for code you'll use over and over, but you might find it helpful in complicated sections where your understanding isn't great. Does the function output what it should or not? Start from some trivial cases and go from there.

    Lastly, what's the nature of the code? As a developer, I have to live with my decisions for years (unless I switch jobs.) I need it to be maintainable and reusable. I also need to demonstrate this consideration to colleagues. That makes classes and modules extremely useful. If you're frequently writing throwaway code for one-off analyses, those concepts might not be useful for you at all. I'd then focus more on correctness (tests) and efficiency. You might find your analyses can be performed far quicker if you have good knowledge about data structures and algorithms and apply them well. I've personally reworked code written by coworkers to be 10x more efficient with clever usage of data structures. It might be a better use of your time than learning abstractions we use for large, long-term applications.
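
    As a rough, self-contained illustration of that last point (all numbers and names made up): switching a membership test from a list to a set is the kind of data-structure change that alone can account for a speedup like that, because the list is rescanned on every lookup.

    ```python
    import random
    import time

    ids = [random.randrange(10_000_000) for _ in range(10_000)]
    wanted_list = list(range(10_000))
    wanted_set = set(wanted_list)

    start = time.perf_counter()
    hits_list = sum(1 for i in ids if i in wanted_list)  # scans the list each time
    t_list = time.perf_counter() - start

    start = time.perf_counter()
    hits_set = sum(1 for i in ids if i in wanted_set)    # hash lookup, ~constant time
    t_set = time.perf_counter() - start

    print(hits_list == hits_set, f"list: {t_list:.3f}s  set: {t_set:.5f}s")
    ```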

  • My advice comes from being a developer, and tech lead, who has brought a lot of code from scientists to production.

    The best path for a company is often: do not use the code the scientist wrote and instead have a different team rewrite the system for production. I've seen plenty of projects fail, hard, because some scientist thought their research code was production level. There is a large gap between research code and production code. Anybody who claims otherwise is naive.

    This is entirely fine! Even better than attempting to build production quality code from the start. Really! Research is solving a decision problem. That answer is important; less so the code.

    However, science is science. Being able to reproduce the results the research produced is essential. So there is the standard requirement of documenting the procedure used (which includes the code!) sufficiently to be reproduced. The best part is the reproduction not only confirms the science but produces a production system at the same time! Awws yea. Science!

    I've seen several projects fail when scientists attempt to be production developers without proper training and skills. This is bad for the team, product, and company.

    (Tho typically those "scientists" fail at building reproducible systems. So are they actually scientists? I've encountered plenty of PhDs in name only.)

    So, what are your goals? To build production systems? Then those skills will have to be learned. That likely includes OO. Version control. Structural and behavioral patterns.

    Not necessary to learn if that isn't your goal! Just keep in mind that if a resilient production system is the goal, well, research code is like the first pancake in a batch. Verify, taste, but don't serve it to customers.

  • Check Udemy for courses and wait for a sale. They normally list for hundreds of dollars but routinely go on sale (pretty much monthly) for about $10-$15.

  • Learning new programming languages is an awesome way to expand your programming brain. If you want to stay in the same scientific computation niche, you can check out Julia or Mathematica. If you’re just looking to broaden your horizons, the world is your oyster. For me, learning Clojure really cooked my noodle but made me a much better programmer since it taught me functional programming.

    Also, just read other people's code! You can learn the conventions that way. Though for you it would be best to find other projects within your niche, because I'm not sure general web dev code would be super helpful.

    There are techniques that are broader than any single language's conventions, and I think learning those is how you can improve. That's hard to teach, though, and it comes from experience with a few different languages, in my opinion.

    And honestly, I can totally respect the “conventions be damned” attitude, because at the end of the day, you’re trying to make something that works, and if nobody else is reading that code, you’ve made the right trade-off.

  • Two books that may be helpful:

    • Fluent Python by Luciano Ramalho
    • Python Distilled by David M. Beazley

    I'm more familiar with the former, and think it's very good, but it may not give you the basic introduction to object oriented programming (classes and all that) you're looking for; the latter should.

  • As one physicist to another, the most important things in code are long, descriptive variable names and comments.

    We usually do not do multi-person, multi-year projects, so all the other comments on this page, especially the ones coming from programmers, are not that relevant. Classes are cool, but they are not needed and often obscure the clarity of algorithmic/functional programming.

    S. Wolfram (creator of Mathematica) said something along these lines (paraphrasing): if you are writing real code in Mathematica, you are doing something wrong.

  • As the other commenter said, you want to learn about programming principles. Like, low coupling or don't repeat yourself.

    How long is your longest program? What would you say is a typical length?

    You say your code is "bad" -- in what ways? For example:

    • Readability (e.g. going back to it months later and going "oh, I remember" or "wtf does this do?!")
    • Maintainability (go back to update and you have to totally rework a bunch of stuff for a change that seems like it should be simple)
    • Reliability (mistakes, haphazard "testing", can't trust output)
    • Maybe something else?
  • Computer scientist here. First, let me dare ask the scientists here a question from a friendly fellow: do you have references for your suggestions?

    Code Complete 2 is a book on software engineering with plenty of proper references. Software engineering is important because you learn how to work efficiently. I have been involved in plenty of bad science-code projects that wasted taxpayers' money because of the naivety of the programmers and team management.

    The book explains how and why software construction can become expensive and what to do about it, covering a vast range of topics agreed upon by industry and academic experts.

    One caveat, however, is that theories are theories. Even best practices are theories. Often, a young programmer tries to force some practice without checking reality. You know you can reuse your function to reduce the chance of bugs and save time. But have you tested whether that is really the case? Nobody can tell unless you test, or ask your team members if it's a good idea. I've spent a good chunk of time on refactoring that didn't matter. Yet, some of it mattered.

    The importance of that reality check is emphasized in the book Software Architecture: The Hard Parts, for example.

    Now, classes, or OOP, were driven by industry to solve industry's problems. Often, as in the case of Java, it was partly a solution for large teams: for them it was important to collaborate while reducing the chance of accidentally shooting each other in the foot. So, for a scientific project OOP is sometimes irrelevant, and sometimes relevant. Code size is one factor that determines the effectiveness of OOP, but other factors also exist.

    Python uses OOP to provide flexibility (here I actually mean polymorphism, to be precise), and sometimes it becomes necessary to use this pattern because some packages rely on it.

    One problem with Python's OOP is that it relies on implementation inheritance. Recent languages tend to avoid this particular type of OOP because its major rival, composition, has been time-proven to make a program's behavior easier to predict (there's a tiny sketch at the end of this comment).

    To me, writing Python is also often easier with OOP. One popular alternative to OOP is what is called a functional approach, but that is unfortunately not well-supported in Python.

    Finally, Automate the Boring Stuff With Python is a great resource on doing routine tasks quickly. Also, pick some Pandas book and get used to its APIs because it improves productivity to a great extent. (I could even cite an article on this! But I don't have the reference at hand.)

    Oh, don't forget ChatGPT and Gemini.
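
    Here is a tiny, hypothetical sketch of the composition point above (the class names are invented): instead of subclassing to add behavior, wrap the object and delegate, so the new class only depends on the small interface it actually uses.

    ```python
    class Spectrometer:
        def acquire(self) -> list[float]:
            return [0.1, 0.9, 0.2]  # stand-in for real hardware I/O

    class LoggingSpectrometer:
        """Has-a Spectrometer (composition) rather than is-a (inheritance)."""

        def __init__(self, inner: Spectrometer):
            self._inner = inner

        def acquire(self) -> list[float]:
            data = self._inner.acquire()
            print(f"acquired {len(data)} points")
            return data

    device = LoggingSpectrometer(Spectrometer())
    print(device.acquire())
    ```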

  • Use an IDE if you aren't already. JetBrains stuff is great. Having autocomplete is invaluable.

  • "Can anyone recommend good resources for learning programming"

    Honestly? No. The best resource is you. Ask questions. Get experience. Ask questions. Get experience. Repeat.

    It's not enough to learn. You also have to do. And you really should learn by doing in this field.


    First of all - fuck Python. I'm sure it's possible to write good code in that language, but it's not easy and it requires a lot of discipline. I don't mean to be mean to Python, it's a truly wonderful language, arguably one of the best languages, when used properly. But it sounds like you're not using it properly.

    Pick a language that:

    1. Has static typing
    2. Does not do garbage collection

    Static typing forces you to have more structure in your code. You can have that structure in a dynamic language but nobody ever does in practice, and part of the reason is that all of the libraries and third-party code you interact with assume you have dynamic typing as a crutch to quickly and easily solve hard-to-solve problems.

    It's far better to actually solve those problems, rather than avoid them. You'll tend to create code where bugs are caught when you write the code instead of when someone else executes the code. That "you vs someone else" distinction is a MASSIVE time saver in practice. It took me about 20 years, but I have learned dynamic typing sucks. It's convenient, but it sucks.

    For more info: https://hackernoon.com/i-finally-understand-static-vs-dynamic-typing-and-you-will-too-ad0c2bd0acc7
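
    For what it's worth, you can get a first taste of this without leaving Python. The type hints in this hypothetical snippet don't change how the code runs, but a separate checker such as mypy reads them and reports the bad call before anything executes:

    ```python
    def mean_intensity(values: list[float]) -> float:
        return sum(values) / len(values)

    print(mean_intensity([1.0, 2.0, 3.0]))  # fine

    # mean_intensity("1.0, 2.0, 3.0")
    # ^ Python itself would happily let you write this call and only fail at
    #   runtime inside sum(); running `mypy` on the file flags the wrong
    #   argument type before the script is ever executed.
    ```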

    On garbage collection - it's a similar issue. It's really convenient to write code where "someone else" deals with all that memory management "garbage" for you, but the reality is you should be thinking about memory when you write your code because, at its heart, 99.999% of the code you write is in fact just moving memory around. Garbage collection is like "driving" a Tesla with autopilot active. You're not really driving at all. And you can do a better job if you grab that wheel and do it yourself.

    I recommend starting with a language that makes you deal with memory yourself (like Rust, which has no garbage collector and makes ownership explicit) to learn how it works, and then from there you might try a language that does most of the work for you without completely taking it out of your hands (for example Swift, which has "automatic" memory management for common situations, but it's not a garbage collector and in some edge cases you need to step in and take over... a bit like cruise control in a car, if we're going to use that analogy).

    It's getting harder these days to find a language that doesn't have garbage collection. The industry has gone decades thinking GC is a good idea and we just need one more fix, which we're working on, to fix that edge case where it fucks up... and then we find another edge case, and another, and another... it's a bit of a mess and entire papers have been written on the subject. But anyway some of the best and newest languages (Rust, Swift, etc) don't have Garbage Collection, which is nice (because writing code in C or Fortran sucks — I'm not recommending that).


    That's enough for now. Just keep muddling about, learning those languages, before trying to tackle bigger problems. Programming is learned in stages, a bit like how a baby learns to sit up, then roll over, then crawl, then stand, then walk with assistance, then stumble around, then walk, then run, then ride a bicycle with three wheels, then a two-wheeled one with no pedals, then a bicycle with pedals, then drive a car after that...

    You skipped all those steps and went straight to driving a car (with autopilot). To learn properly, you don't need to go all the way back to "sitting up and crawling", but you should maybe go back just a little bit. Figure out how to get code to run, at all, in a language like Rust, and get familiar with it.


    After you've done that come back here and ask what's next. We can talk about SOLID, Test Driven Development, all the intricacies of project management in git, exceptions vs returning an error vs failing silently, and when to use third party code vs writing your own (phew boy that's a big one...).

    But for now - just learn a lower level language. Programming is a bit like physics. You've got elements, and under that atoms, and under that... well I don't even know what's under that (you're the scientist not me). There are "layers" to programming and it's important to work at the right layer and it's also important to understand the layer below and above the one you're working at.

    If Python is at layer x, then you really need to learn layer x-1 in order to be good at Python. You don't need to go all the way down - you can't go all the way down (how do magnets work?).
