Perpetually Under Construction

#rstatsnyc as told by @brookLYNevery1, @dataandme, and autographs


EDIT: I’ve added the notes from @dataandme and linked to people’s twitter and slides (if I found them). This is probably going to be an ongoing process…


Another year and another talk at the NYC R Conference. As always, the conference was filled with excellent speakers (I’m biased here because I was one of them…), food, and people.

Brooke Watson (@brookLYNevery1) did a fantastic job illustrating and summarizing all of the talks. So I’ve just linked to all her tweets (after the break).

Oh… and I got my books signed :)

#rstatsnyc as told by #autographs. #rstats #nycdatamafia #python #datascience


Analysis-Based Project Templates


One of the most annoying things you hear people say when they are working with a common code base is “It works on my machine…”. Conversely, one of the more satisfying things is running a script you are not actively working on and having it run without problems.

Project templates are one way to address this problem. The original post about project templates mainly talks about the folder structure, but not so much about the rationale behind why things are the way they are. Also, the original post used a user-based subfolder structure under src, which caused some problems when we ended up doing code reviews.

Why do we even want to use “projects”? Software Carpentry has a good set of explanations. When dealing with working directories and workspaces within R and RStudio, even RStudio suggests using projects.
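
To make the working-directory point concrete, here is a small R sketch (the data file name is made up for illustration): a hard-coded setwd() only works on one machine, while paths relative to the project root, or built with here::here() from the here package, work for anyone who opens the project.

# Fragile: an absolute path that only exists on my machine
# setwd("C:/Users/daniel/projects/my_analysis")

# Portable: opening the .Rproj file sets the working directory to the
# project root, so project-relative paths work on any machine
dat <- read.csv("data/project_data/original/survey.csv")  # hypothetical file

# The here package resolves paths from the project root even when the
# script is run from a subfolder
library(here)
dat <- read.csv(here("data", "project_data", "original", "survey.csv"))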

From VMs to LXC Containers to Docker Containers


Since I joined SDAL, the lab has undergone a few infrastructure-related changes, mainly in how applications are run on the servers. From what I remember, we started using VirtualBox virtual machines, then moved to LXC Linux containers, and we are now rebuilding our entire infrastructure using Docker containers.

Project Templates


Project templates provide a standardized way to organize files. Our lab uses a template that is based on the Noble 2009 paper, “A Quick Guide to Organizing Computational Biology Projects”. I’ve created a simple shell script that automatically generates this folder structure here, and there’s an rr-init project by the Reproducible Science Curriculum folks.

The structure we have in our lab looks like this:

project
|
|- data             # raw and primary data, are not changed once created
|  |
|  |- project_data  # subfolder that links to an encrypted data storage container
|  |  |
|  |  |- original   # raw data, will not be altered
|  |  |- working    # intermediate datasets from src code
|  |  +- final      # datasets used in analysis
|
|- src/             # any programmatic code
|  |- user1         # user1 assigned to the project
|  +- user2         # user2 assigned to the project
|
|- output           # all output and results from workflows and analyses
|  |- figures/      # graphs, likely designated for manuscript figures
|  |- pictures/     # diagrams, images, and other non-graph graphics
|  +- analysis/     # generated reports (e.g., rmarkdown output)
|
|- README.md        # the top level description of content
|
|- Makefile         # Makefile, if applicable
|- .gitignore       # git ignore file
+- project.Rproj    # RStudio project
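
For illustration, here is a minimal R sketch of a generator for this layout (the actual generator is the shell script linked above, so the function name below is made up):

# Hypothetical helper that creates the skeleton above under `path`
create_project_skeleton <- function(path) {
  dirs <- c(
    "data/project_data/original",
    "data/project_data/working",
    "data/project_data/final",
    "src",
    "output/figures",
    "output/pictures",
    "output/analysis"
  )
  for (d in dirs) {
    dir.create(file.path(path, d), recursive = TRUE, showWarnings = FALSE)
  }
  # placeholder top-level files; the .Rproj file is usually created by RStudio itself
  file.create(file.path(path, c("README.md", "Makefile", ".gitignore")))
  invisible(path)
}

create_project_skeleton("my_new_project")  # example usage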

Changes in Higher Education


As a PhD student who already has a Master’s degree, it’s safe to say that I’ve been in school for a long time. One of the things in higher education that I have come to dislike over the years is the way professors assess students in the classroom.

NYC R Conference


Just got back from the 3rd annual NYC R Conference this past weekend. I have been honored to be one of the few speakers for the 3rd year in a row. This year’s talk, “So You Want to be a Data Scientist”, gave a whirlwind tour of the tools and skills needed to be a Data Scientist. I conveyed all this information in 56 slides and did it in 20 minutes.

The day before the conference, I also ended up presenting my current work on behavior diffusion in social networks, along with a little of my other work, to the NewYork-Presbyterian Hospital’s Value Institute. This was probably one of the more nerve-wracking things I’ve had to do recently: presenting my research to a few extremely talented and smart PhDs doing health analytics for NewYork-Presbyterian. But we had a lot of good, meaningful discussion during the talk; it went overtime and we were kicked out of the room. That has to be a good sign, right?

Preparing for the Summer


Working While Pursuing a PhD


My lab has extended me an opportunity to be a research scientist, helping out our current senior data scientist with the daily analytics and IT support the lab needs. It’s a very enticing opportunity, but I need to stop and think about my options.

Education in the United States


As people started sharing their educational experiences from around the world, I realized how lucky I am to have been educated in New York City, and how much less the United States focuses on education compared to many other countries.

Open Access


“Open” has played an important role in my life over the last few years. It all began when I was an attendee at a Software Carpentry workshop back in 2013. Before then, I only knew about Open Access and Open Source, but wasn’t active in any Open community.

This week is Open Data Week at Virginia Tech, and it begins with an “Open Research/Open Data Forum [on] Transparency, Sharing, and Reproducibility in Scholarship”, which I was honored to be a part of.