Turning Footnotes into Endnotes …

Submitting papers written in LaTeX has some advantages: one usually does not have to change any formatting manually; LaTeX does that for us if it is instructed to do so. Journals often want submissions to be formatted in a peculiar way. My challenge was to change footnotes into endnotes. I remembered that it was easy in LaTeX, but not how it was done, so Google had to help me …
Continue reading “Turning Footnotes into Endnotes …”

Two ways to get your LaTeX document to Word

Some LaTeX aficionados might think … WHY? … but there is always a reason to transform your nicely laid-out LaTeX file into something that resembles it in Word. Journals that do not accept LaTeX files or PDFs as a submission are one reason, and probably the most important (and annoying) one for researchers.

Here is a solution that seems to work. Use the LaTeX-generated PDF and transform it into a Word file, using the following link:

http://www.pdftoword.com/

I tried it and it works nicely, even with some (basic in my case) formulas.

An alternative procedure that seems to work well is GrindEQ, which also allows you to transform LaTeX documents to Word:

https://grindeq.com/index.php?p=latex2word&lang=en

Edit: there is an alternative procedure by Tyler Ransom, posted on GitHub specifically for publication in the Journal of Human Resources (follow the link here)

Preamble in do-files

When writing a lot of do-files during a research process, it is hard to keep track of what a do-file was for, what it needs in terms of input, and what it generates in terms of output. Especially if you get your paper back from the (journal) referees with comments on what you should change, and want to re-run some part of the analysis a year after you have done it, it is hard to remember exactly what you need to do.

I use a preamble in my do-files to document (some of) this information, but also to set a couple of standard pointers that make my work easier …
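
A stripped-down sketch of what such a preamble can look like (project name, paths, and file names are placeholders):

* ----------------------------------------------------------------
* project : wage-dynamics (placeholder)
* purpose : prepare the yearly wage data for the regression analysis
* input   : $data/raw_wages.dta
* output  : $data/wages_clean.dta, $logs/prepare_wages.log
* ----------------------------------------------------------------
version 14
clear all
set more off
capture log close
global root "C:/projects/wage-dynamics"    // placeholder project folder
global data "$root/data"
global logs "$root/log"
log using "$logs/prepare_wages.log", replace text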
Continue reading “Preamble in do-files”

Log-files

Log-files are important in the workflow for two reasons:

  1. Most importantly, they keep track of any messages that are “non-fatal”, i.e. messages that do not stop the do-file from running. Quite often you will want to ignore those messages, but when you suspect an error has occurred you can search through your log-files for them.
  2. It is also convenient to use the log-file to collect only the part of the output (results) that you will actually need for your research project (see the sketch below).
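
A sketch of the second point (placeholder file name): open the log only around the output you want to keep, and switch it off around the noisy parts:

log using "log/results.log", replace text
* ... commands whose output should end up in the log ...
log off         // stop recording during noisy intermediate steps
* ... data preparation, checks, etc. ...
log on          // resume recording
* ... the results you actually need ...
log close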

Continue reading “Log-files”

Putting a spell on data

For those of you working with spell-data (could be panel, but I am thinking more of event-history data), there is a great tool that you should be aware of. You can get it at SSC by typing the following command:


ssc install tsspell

What can it do for you? Well, as I said, it puts a spell on your data, once you give it the right command.
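
A hedged sketch of what such a call can look like (not necessarily the post's own example; id, month, and the 0/1 indicator employed are hypothetical variable names):

tsset id month
tsspell, cond(employed == 1)
* by default tsspell creates _spell (spell number), _seq (position
* within the spell) and _end (1 in the last period of a spell)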
Continue reading “Putting a spell on data”

capture and nostop

In some instances the default behaviour of STATA, stopping whenever an error occurs, is a (minor) annoyance. In one of my projects I was running the same set of regressions over several groups using loops. However, whenever STATA found a group for which the regressions could not be run, it would stop with the error “no observations”. Similar things can happen when you select (sub)groups to run commands like summarize, tabulate, etc.

One solution is to “capture” the command, so that any error it returns does not stop the do-file.
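
Applied to the loop scenario above, the idea looks roughly like this (a hedged sketch with hypothetical variable and group names):

levelsof group, local(groups)
foreach g of local groups {
    capture regress y x1 x2 if group == `g'
    if _rc != 0 {
        display as txt "group `g': regression skipped, return code " _rc
    }
}

My favourite example of capture, though, is the following statement, which can be found in almost all of my do-files …: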

Continue reading “capture and nostop”

Running regressions with similar sets of variables

Quite often we run variations on regressions, including or excluding (sets of) variables. Copy-pasting the regression and eliminating the variables to be excluded is one way, but given that we are speaking of sets of variables, why not use locals to do the work for you:
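
A minimal sketch of the idea, with hypothetical variable names:

local demog  "age agesq female married"
local educ   "i.education"
local region "i.region"

regress wage `demog'
regress wage `demog' `educ'
regress wage `demog' `educ' `region'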
Continue reading “Running regressions with similar sets of variables”

Repetitive tasks … let STATA do the work

Especially in the process of data preparation, but also when one runs whole sets of analyses, we start repeating commands and sets of commands for similar variables. For example, in one of my projects I had to process salary information that was monthly in wide format, and for several reasons I could not use reshape:

I could have typed:
gen str10 v201_1=""
replace v201_1=c2 if c1=="201"
gen str10 v201_2=""
replace v201_2=c3 if c1=="201"
[...]
gen str10 v201_12=""
replace v201_12=c13 if c1=="201"
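
Instead, a loop can do the typing for me (a sketch assuming the same layout, i.e. the twelve monthly values sit in columns c2 to c13):

forvalues m = 1/12 {
    local col = `m' + 1
    gen str10 v201_`m' = ""
    replace v201_`m' = c`col' if c1 == "201"
}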

Continue reading “Repetitive tasks … let STATA do the work”

Why doesn’t this do-file run through …?

Quite often a do-file is written to run on various data-files that all seem to be the same. Say, you have done it for one year of a data-set and want to repeat the same for the subsequent years for which you have data. A check whether the data actually contain some observations (given your selection) is often a good idea; here is how I did it in a recent project:


count
assert `r(N)'>1

I am using the saved result r(N) that STATA automatically generates after count. This is a neat feature that you should consider for many other commands (try return list after your favourite command).
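
For instance (a small sketch with a hypothetical variable wage):

summarize wage
return list                          // shows r(N), r(mean), r(sd), ...
gen double wage_dev = wage - r(mean)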

Continue reading “Why doesn’t this do-file run through …?”

Cleaning up messy (string) variables

Working on firm-level data (again), I have had the experience of cleaning up hundreds of different spellings of occupations, which should eventually be categorized into a set of occupations that only differ when the actual occupations differ. Let me call the variable occupation.

34. slesar po remontu la
38. slesar po rem. la
44. slesar po rem. i obsluzh. vent. i kondicionirovaniya
54. slesar po rem. i obsluzh. ven. i kondicionirovaniya
146. slesar po rem. la
205. slesar po remontu agregatov
259. slesar po rem.agregatov
313. slesar po remontu kompressornyh ustanovok i oborudovaniya
343. slesar po remontu oborud

A wonderful mess, and actually only a minor part of the full data-set.
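
A hedged sketch of typical first steps in such a clean-up (the actual replacements depend on the data; these are just examples):

* normalise case and spacing first
replace occupation = lower(itrim(trim(occupation)))
* expand recurring abbreviations, one rule at a time
replace occupation = subinstr(occupation, "po rem.", "po remontu ", .)
replace occupation = itrim(occupation)
* inspect what is left
tab occupation, sort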
Continue reading “Cleaning up messy (string) variables”