Jupyter notebooks have significantly changed how I approach coding. For a long time, I would carefully design algorithms before starting to write code. This process involved visualizing the concept, either in my head or in a document, and gradually developing small, manageable chunks of code. My aim was always clear: write scripts that could later be integrated into larger products or codebases.
However, my transition to using Jupyter notebooks intensively has introduced a new dynamic. Now, my primary focus is on generating outputs and seeing results immediately. Despite my efforts to review the code properly, the urge to run a cell and instantly display its output (using df.head() to preview data, checking the contents of a list, or verifying the shape of a numpy array) often overrides my previous habits. This method, although insightful, tends to slow down my overall workflow, or at least it feels like it does.
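The inspection habit described above might look something like this in a single notebook cell (the DataFrame, list, and array here are made-up illustrations, not data from the original post):

```python
import numpy as np
import pandas as pd

# Hypothetical data standing in for whatever the notebook has loaded
df = pd.DataFrame({"name": ["a", "b", "c"], "value": [1, 2, 3]})
items = ["apples", "bananas", "cherries"]
arr = np.arange(12).reshape(3, 4)

# The quick checks: run the cell, eyeball the output, move on
print(df.head())   # preview the first rows of a DataFrame
print(items)       # check the contents of a list
print(arr.shape)   # verify the shape of a numpy array, here (3, 4)
```

Each check is trivial on its own; the cost comes from how often the run-and-look cycle interrupts actual writing.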
I am not a disciplined person by nature. My work is often messy, and I tend to jump from one task to another. I force myself to be more organized by playing small tricks on my brain, and my self-discipline is continually tested when I use Jupyter notebooks.
One practical issue that has become apparent involves the reusability of scripts. In a traditional setup, I could easily reuse scripts in different contexts. Now, because the code is embedded in notebook files, reusing it isn't straightforward. Although Visual Studio Code lets me open these notebooks, accessing them from a terminal or on a server requires starting a Jupyter session, an additional step that complicates my workflow.
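One common workaround for the reusability problem (a sketch of my own, not something the original text prescribes) is to move the logic worth keeping out of the notebook and into a plain .py module; both the notebook and any terminal script can then import it. The module and function names below are hypothetical:

```python
# utils.py — a hypothetical module holding logic extracted from a notebook
def normalize(values):
    """Scale a list of numbers into the 0-1 range."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # Avoid division by zero when all values are equal
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# In the notebook (or any script on the server), the same code is
# one import away:
#   from utils import normalize
print(normalize([10, 20, 30]))  # → [0.0, 0.5, 1.0]
```

For an existing notebook, `jupyter nbconvert --to script notebook.ipynb` can also dump the code cells into a .py file as a starting point for this kind of extraction.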
Despite these challenges, I still appreciate how Jupyter notebooks help me quickly test and see changes in my code. They make experimenting with new ideas easier. However, I know I need to find a balance between the two approaches to maintain my productivity. I am still figuring out how to do this.