How to Declutter Your Data Science Workspace

by Rick Bahague, August 5th, 2019

Too Long; Didn't Read

Working on a data science project almost always fills the working directory with clutter: Python/R scripts, data sets, journal articles and other references, notebooks and assorted helper scripts. A fixed directory hierarchy (ansible-playbooks, data, notebooks, references, repo, reports) keeps every project organized, with repo/src laid out so the Scala code compiles easily under Maven, and Ansible playbooks automating repetitive tasks.


Working on a data science project almost always produces an impressive amount of clutter in the working directory. Most data scientists end up with the following materials dumped in their project working directory:

  • Python/R scripts
  • Data sets
  • Reference materials
    — includes journal articles, slides, other documents
  • Notebooks
  • Notes
  • Scala sources (if using Spark)
  • Cloned repositories of other projects relevant to the current work
    — usually a source of inspiration, methodology or case studies
  • Other scripts
    — for data transfer, data clean-up, or a runner.sh to submit jobs on a cluster. I always have a runner.sh that contains the YARN settings for spark-submit (a sketch follows this list)
  • Other files thought to be useful, but which often are not
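
A minimal sketch of what such a runner does, written here in Python via subprocess to keep the examples in this post in one language; the jar path and YARN resource settings below are placeholders, not values from a real project:

import subprocess

# Hypothetical jar path and YARN settings; adjust to your cluster.
cmd = [
    "spark-submit",
    "--master", "yarn",
    "--deploy-mode", "cluster",
    "--num-executors", "8",
    "--executor-memory", "4g",
    "--executor-cores", "2",
    "repo/target/project-jobs-1.0.jar",
]
subprocess.run(cmd, check=True)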

Given a project, a data scientist typically follows these steps to tackle it:

Workflow of data science work

  1. Requirements gathering
  2. ETL data from sources using Python, R or Scala
  3. Data calibration
     - perform descriptive statistics on the data to validate that it reflects business facts. This takes some time, even in a collaborative environment where the business side and the data scientists work closely together. Data calibration is also needed to further verify business facts (a sketch follows this list).
  4. Data science and insights generation
     - with the data validated and calibrated, a data scientist can start generating insights, producing notebooks, scripts or Scala jars. Notes, journal articles and other references add to the clutter in the working directory.
  5. Visualization and report creation
     - reports for the business are consolidated into a presentation from the outputs of various visualization tools (PNG files, Tableau workbooks)
  6. PySpark or Spark job sources for operationalization
     - if the study is to be operationalized, prototypes are built as a guide for the data engineers

The activities behind these steps inevitably clutter the project directory.
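
As a minimal sketch of the calibration step, assuming a hypothetical transactions dataset with amount and transaction_date columns (none of these names come from a real project):

import pandas as pd

# Hypothetical file and column names, for illustration only.
df = pd.read_csv("data/local/transactions.csv")

# Descriptive statistics: do counts, totals and ranges match
# what the business already knows to be true?
print(df["amount"].describe())
print(df["transaction_date"].agg(["min", "max"]))

# One concrete sanity check against a known business fact.
assert (df["amount"] >= 0).all(), "negative amounts need investigation"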

De-clutter working directories

This is the directory hierarchy I use for every data science project (a small helper to create the skeleton follows the list):

  • ansible-playbooks:
     Ansible playbooks created to automate repetitive tasks
  • data:
     all data sets (toy, final, intermediate aggregates, etc.). I usually have two subdirectories: one for datasets generated on the cluster (we run in a Spark environment) and one for locally generated datasets
  • notebooks:
     with subdirectories for notebooks running on the cluster and locally
  • references:
     PDFs, journal articles and other reference materials
  • repo:
     all Python, Scala and R scripts, organized as repo/src/python/main (Python and R scripts), repo/src/python/lib (for various utilities) and repo/src/main (for Scala code). repo is organized like this to allow easy compilation of the Scala code using a Maven build.
  • reports:
     all reports go here
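
A minimal helper to lay out this skeleton for a new project; the cluster/local subdirectory names are placeholders for the cluster-generated and locally generated splits mentioned above:

from pathlib import Path

# Skeleton from the list above; "cluster" and "local" are placeholder
# names for the cluster-generated vs locally generated split.
DIRS = [
    "ansible-playbooks",
    "data/cluster", "data/local",
    "notebooks/cluster", "notebooks/local",
    "references",
    "repo/src/python/main", "repo/src/python/lib", "repo/src/main",
    "reports",
]

for d in DIRS:
    Path(d).mkdir(parents=True, exist_ok=True)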


Version Control

I use git to manage versions and changes. A .gitignore file that ignores everything except the main directories above prevents accidental inclusion of files not intended for commit to the remote repo: the leading /* rule ignores everything at the top level, and each !/... entry re-includes one of the whitelisted directories.

Here’s my .gitignore file.

/*
**/.DS_Store
**/.ipynb_checkpoints
**/*.log
repo/src/python/lib/
!/resources
!/notebooks
!/repo
!/ansible
!/data
!/.gitignore

Tools

  • Ansible: I am using Ansible to automate repetitive scp and spark-submit runs on a cluster client (a sketch follows this list). For simpler tasks, this may not be necessary. Read about it here.
  • Git: for version control
  • Sublime: as text editor
  • Anaconda3: as Python distribution, with jupyter-notebook
  • Markdown cheatsheet: for any documentation written in Sublime
  • I’ve recently added Airbnb's Knowledge Repo for knowledge sharing with colleagues. You can read more about it here.
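
As a minimal sketch of how such a playbook run can be kicked off from a script, with hypothetical inventory and playbook file names (ansible-playbook is Ansible's standard CLI):

import subprocess

# Hypothetical file names; the playbook would bundle the repetitive
# scp and spark-submit steps described above.
subprocess.run(
    ["ansible-playbook", "-i", "inventory.ini", "copy-and-submit.yml"],
    check=True,
)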

End Notes

How are you de-cluttering your working directory? Get the workspace template here. Feel free to comment and improve.

References:

Banner Image source: https://hortonworks.com/products/partner-solutions/data-science/