For my APSA paper this year, I had to merge two survey data sets, and do a bunch of work with imputation and instrumental variables. To complicate things, I was working with a student on the merge.
Here are some lessons learned:
1. Agree on an outline before doing the data work.
Know the main steps going in, and write them as comments in the code. This makes the logic of the data work clearer, which helps with debugging. For long merges, it can also be motivating to see your progress as you go along. And if the steps need to change, it's worth figuring out why the original plan didn't work.
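As a rough sketch (the file and variable names here are invented), the top of a merge script might look like:

## Step 1: Load the raw survey waves
october <- read.csv("october_wave.csv")
november <- read.csv("november_wave.csv")

## Step 2: Harmonize variable names across waves
names(november)[names(november) == "resp_age"] <- "age"

## Step 3: Merge on respondent ID
merged <- merge(october, november, by = "resp_id")

## Step 4: Check that the merge didn't duplicate respondents
stopifnot(!any(duplicated(merged$resp_id)))

Writing the "## Step" comments first, before any code exists, gives you the outline; filling in the code underneath each one gives you the progress check.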
2. Stick to an intuitive naming convention for variables, data sets, etc.
Yes, you'll have to type out longer variable names, but you'll save a ton of time in debugging. For instance, "october_cleaned_dataset" instead of "d2.10c".
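To make the contrast concrete (a made-up cleaning step):

# Six months from now, what is this?
d2.10c <- d2.10[!is.na(d2.10$age), ]

# Self-documenting, and easy to search for
october_cleaned_dataset <- october_dataset[!is.na(october_dataset$age), ]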
3. Store files as .csv, not .dat.
R's "save" command isn't fully compatible across versions and operating systems. Instead, save small-to-moderate datasets as .csv files. That way, compatibility won't be an issue. (Plus you can open the same files in other programs, like STATA or excel.)
4. Use names, not numbers.
Don't use commands based on row or column numbers: data_set[, c(1:4, 7, 8, 23)]. They're hard to read, and they're brittle: if a new column gets inserted into data_set, that code isn't going to work anymore. Instead, use variable names: data_set[, c("time1", "time2", "time3", "time4", "age", "gender", "year")]. Regular expressions can be very helpful here: data_set[, c(grep("^time", names(data_set), value = TRUE), "age", "gender", "year")]. (Note that grep needs value = TRUE so it returns the matching column names rather than their positions; mixing numeric positions and character names in one subset won't work.) Yes, it takes longer to write, but you'll save a huge amount of time debugging. A self-contained illustration follows below.
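Here is that illustration, on a toy data set with made-up values:

# Toy data set with the columns from the example above
data_set <- data.frame(time1 = 1:3, time2 = 4:6, time3 = 7:9, time4 = 10:12,
                       age = c(25, 40, 31), gender = c("f", "m", "f"),
                       year = c(2008, 2008, 2009))

# Select every time* column plus the demographics, by name
keep <- c(grep("^time", names(data_set), value = TRUE), "age", "gender", "year")
subset_by_name <- data_set[, keep]

# The name-based selection survives a new column being added
data_set$state <- c("OH", "PA", "VA")
identical(subset_by_name, data_set[, keep])  # still TRUE

A numeric selection like data_set[, c(1:4, 7, 8, 23)] would silently grab the wrong columns after that kind of change; the name-based version either keeps working or fails loudly.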