
The Role of Conferences on the Pathway to
Academic Impact: Evidence from a Natural
Experiment
Fernanda L. L. de Leon†  Ben McQuillin‡
March 2015
Abstract
Though conferences are a prominent feature in academic life, there is a notable deficiency of existing evidence for their effectiveness in promoting academic work. We provide such evidence, using a 'natural experiment': the last-minute cancellation - due to 'Hurricane Isaac' - of the 2012 American Political Science Association (APSA) Annual Meeting. Having assembled a dataset containing 15,624 conference articles, we quantify conference effects on articles' visibility using a difference-in-differences framework. Our findings indicate that, on average, articles gain 14-26 downloads in the first year after being presented in a conference, and that this advantage continues to accumulate thereafter. The conference benefits are largest for authors affiliated to lower-tier institutions and for scholars in the early stages of their career. Our findings are robust to several tests.
This research is funded by the Leverhulme Trust (grant RPG-2014-107). We are grateful for useful inputs from Steve Coate, David Hugh-Jones, Arthur Lupia, Judit Temesvary, Fabian Waldinger, and seminar participants at the Universities of East Anglia, Kent and Portsmouth. Excellent research assistance was provided by Chris Bollington, Raquel Campos-Gallego, Ben Radoc, Arthur Walker and Dalu Zhang.
† School of Economics, University of Kent. E-mail: f.de-leon@kent.ac.uk
‡ School of Economics, University of East Anglia. E-mail: b.mcquillin@uea.ac.uk
JEL Classification: O39, I23, L38
Keywords: effects of conferences, diffusion of scientific knowledge
1 Introduction
Conferences feature prominently in the dissemination strategies for most academic projects, and academics generally invest a significant fraction of the time and resources available to them in attending (or organising) such events.1 It is therefore striking that there is little existing scientific evidence for, or direct measurement of, the effectiveness of conferences in promoting the visibility of academic work. In this paper we address this important deficiency by estimating the causal effects of a specific conference - the American Political Science Association (APSA) Annual Meeting - on the visibility of the academic papers presented there. The APSA meeting is one of the largest and most important political science conferences, gathering close to 3,000 presenters every year from more than 700 institutions. We utilise a 'natural experiment': the cancellation - due to 'Hurricane Isaac', at 48 hours' notice - of the 2012 meeting, which was scheduled to take place in New Orleans. By the time of this cancellation, the conference program had been fully arranged and was compositionally indistinguishable from previous editions. There was therefore a unique opportunity to identify conference effects. We assembled a new dataset comprising 15,624 conference papers scheduled to be presented between 2009 and 2012, and we matched these to outcomes collected from the Social Science Research Network (SSRN), including numbers of articles' downloads and downloads of conference authors' other work.2
Footnote 1: The American Economic Association advertised close to 300 meetings in 2014, and in the field of medical science there are an estimated 100,000 meetings per year (Ioannidis, 2012).
Footnote 2: We also matched the conference papers to citation counts collected from Google Scholar, but we did not find sizeable conference effects on this outcome when observed two years after the conference. It seems likely that an effect on citations will take much longer to manifest than an effect on downloads.

To quantify conference effects, we adopt a difference-in-differences approach, using a large comparator meeting - the Midwest Political Science Association
(MPSA) - in the same discipline that was not cancelled. We examine how outcome patterns change in 2012 (first difference) in the APSA versus the MPSA (second difference) meeting. We detect large and statistically significant conference effects on articles' visibility. On average, articles gain 14-26 downloads in the first 15 months after being presented at the conference, and the advantage continues to accumulate - to a gain of 17-30 downloads - in the subsequent 12 months. The existence of conference effects is also confirmed in a separate diff-in-diff analysis, in which we estimate the gains generated specifically by the audience for a paper within the conference.
Our test is based on the hypothesis that the size of the audience in a session affects an article's prospective number of downloads. Session attendance per se is not observed; instead we use a constructed measure of 'expected audience', based on information provided in the APSA program. In creating this variable, we assume that an article's audience depends positively on the number of other conference papers on the same theme (the idea being that participants sort into attending sessions that are closely related to their own work) and negatively on the number of articles on this same theme being presented in parallel (because these compete for the same time-slot audience). If attendees download articles they see in presentations during the conference, papers presented in well-attended sessions should gain more downloads than articles presented in poorly attended ones. Articles with a larger expected conference audience are then expected to be the ones more negatively affected by the 2012 APSA meeting cancellation. Our results confirm this hypothesis, and indicate that every 13 'expected audience' members generate one download of the presented article in the 15 months following the conference. We present several econometric specifications and robustness checks to validate our identification strategy: i.e. to ensure that we are not capturing other factors, such as unobservable heterogeneity related to articles' download prospects or changes in the profession's demand and supply for research themes, instead of conference effects. Then, in other specifications, we consider other possible correlates of session attendance. Just as papers with a lower 'expected audience' measure (as described above) were less affected by the 2012 cancellation, we find the same is true for papers that were allocated to the first session of the meeting (which is often perceived as ill-attended, because participants are still registering) and for papers scheduled to be presented in competition (in a different session, but on the same theme and in the same time slot) with a paper presented by a famous author.
Finally, we ask: who benefits more from presenting at conferences? Does a greater gain accrue to already-established scholars or to less-known and newcomer authors? The answer is not obvious. One supposition might be that conferences are particularly valuable for less-established authors as a means to advertise their work. A countervailing supposition might be that scholars with an existing reputation benefit by attracting large audiences within the conference, while less-known authors find their presentations less attended and therefore less effective. In other words, conferences could plausibly either mitigate or exacerbate any 'famous-get-famous effect'.3 We examine conference effects by authors' affiliation and previous SSRN publication status. Statistically significant conference effects are found for articles authored by scholars with no previous articles posted on SSRN and by scholars affiliated to institutions outside the Top 10. For authors in institutions below the Top 100, we find weak evidence of a positive effect on downloads of other working papers posted shortly after the conference. These results suggest that conferences increase the visibility not only of the presented articles, but also of their authors' work more generally.

Footnote 3: The 'famous-get-famous' effect is also sometimes known as the 'Matthew effect', as discussed by Merton (1968). Related reinforcement effects are documented by Salganik et al (2006) in an experiment in the music market, and by Oyer (2006) in a study of the relationship between initial labor market conditions and long-term career outcomes for academic economists.

Our findings with respect to who benefits most from attending academic meetings are novel within the existing literature. Our more general findings - that conferences have positive effects for presented papers - are consistent with previous findings, but our results are derived from a more compelling identification strategy. Previous studies document the positive correlation between conference acceptance, publication prospects, and number of citations (Winnik et al 2012; Galang et al 2011; Lee et al 2012; Toma et al 2006). However, in such analyses, one cannot
distinguish between the selection effect (the extent to which the conference selects papers that are likely to have greater impact) and the conference effect (the extent to which the conference itself enhances a paper's impact). Blau et al (2010) adopt an experimental approach that is closer to ours. They analyse the effects of a mentoring workshop for female junior professors, attendance at which was decided on the basis of a randomized trial, and they find that attending the workshop significantly increases junior scholars' chances of publication.4 However, one cannot tell whether these findings are particular to the specific, somewhat unique design and attendance profile of the workshop in question, or generalise to academic meetings. The workshop analysed by Blau et al had around 35 participants, while the APSA meeting (almost 100 times larger) is at the other end of the spectrum of conference scale. Yet the APSA meeting within-session experience may be viewed as broadly similar to smaller conferences, and there is therefore an excellent prospect that our findings generalise to other academic meetings.

Footnote 4: The purpose of the workshop was to help attendees build peer networks of junior faculty working in similar areas.
The remainder of the paper proceeds as follows. In section 2, we explain the data. In section 3, we present our results. In section 4, we conclude, noting parallels between our findings and recent work on the economics of science.
2 Data

2.1 The American Political Science Association and the Midwest Political Science Association Meetings
In investigating the effect of conferences, our analysis focuses on a specific conference: the annual meeting organized by the American Political Science Association (APSA). This is one of the largest conferences in the field of political science, with close to 3,000 papers presented each year. It occurs in the last week of August or the first week of September (always on the American Labor Day weekend), and comprises four days of presentations in panels, posters, workshops, evening sessions and roundtables.
The 2012 APSA meeting was due to take place in New Orleans and was scheduled to start on August 30. However, it was cancelled at less than 48 hours' notice due to the approach of 'Hurricane Isaac'.5 By the time of this cancellation, and indeed well before any genesis of tropical cyclone Isaac itself,6 the conference program was finished and publicly available, listing articles that one may suppose were similar to those in previous APSA meetings. Indeed, we find supporting evidence for this supposition. The fraction of participants by institution is similar in the 2012 APSA meeting and in the 2009-2011 APSA meetings. For 82% of authors' institutions, a mean test does not reject the hypothesis, at the 10% level, that the fraction of participants from a given institution is the same in the 2012 (cancelled) APSA meeting as in the 2009-2011 (occurring) APSA meetings.
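The per-institution comparison described above can be illustrated as a simple two-sample test of proportions. The sketch below is our own minimal illustration, not the authors' code, and assumes a dataframe `program` with one row per author-paper program entry and hypothetical columns `institution` and `year`.

```python
# Minimal sketch (assumed data layout, not the authors' code): for each institution,
# compare its share of 2012 APSA program entries with its share in 2009-2011,
# using a two-sample test of proportions.
import pandas as pd
from statsmodels.stats.proportion import proportions_ztest

def institution_balance(program: pd.DataFrame) -> pd.DataFrame:
    """program: one row per author-paper entry, with hypothetical columns 'institution', 'year'."""
    pre = program[program["year"].between(2009, 2011)]
    post = program[program["year"] == 2012]
    rows = []
    for inst in program["institution"].unique():
        counts = [(post["institution"] == inst).sum(), (pre["institution"] == inst).sum()]
        nobs = [len(post), len(pre)]
        _, pval = proportions_ztest(counts, nobs)
        rows.append({"institution": inst, "p_value": pval})
    return pd.DataFrame(rows)

# Institutions with p_value > 0.10 are those for which equality of the 2012 and
# 2009-2011 shares is not rejected at the 10% level.
```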
We are therefore able to use the cancellation as a 'natural experiment' to estimate various 'conference effects'. In the diff-in-diff analysis reported in section 3.1 we use, as a baseline for APSA articles, papers accepted at a comparator conference: the Midwest Political Science Association (MPSA) Annual Meeting. The APSA and the MPSA are professional associations of political science scholars in the United States and are similar in terms of academic prestige. They publish the two leading journals in the field, The American Political Science Review and The American Journal of Political Science, respectively. The APSA and the MPSA meetings are the largest conferences in the field of political science and are similar in profile, format and scale. Moreover, the
MPSA conference takes place in April, five months before the APSA conference, so there is no possibility that the cancellation of the 2012 APSA meeting affected in any way the profile of papers at the 2012 MPSA meeting.

Footnote 5: The following announcement, on behalf of the APSA President, Bingham Powell, was published on August 29: "A primary function of the association is to provide the highest quality meeting experience possible. In light of revised information we have from local officials about the trajectory of Isaac, we now anticipate the potential for sustained rain, flooding, power outages and severely restricted transportation into the city on Thursday. Under these circumstances, it is not prudent to convene the meeting." http://www.apsanet.org/content_82576.cfm?navID=988.
Footnote 6: The synoptic history of Hurricane Isaac (see http://www.nhc.noaa.gov/data/tcr/AL092012_Isaac.pdf) traces back to an atmospheric trough that started developing west of Africa on August 16-17 and had become a 'tropical storm' by August 21. A state of emergency was declared for Louisiana on August 26.
We focus on articles presented in panel sessions (which concentrate most of the
participants). In both meetings, panel sessions are 1 hour and 45 minutes long and
usually have four presenting papers, one chair and one or two discussants. The
two meetings have a similar registration fee, and similar policies and procedures for
paper submission and acceptance.
2.2 Sample and Sources
We assembled data on all papers and sessions presented at the APSA meetings from 2009 to 2012. This dataset comprises 12,055 presented articles. We also collected a random sample of 20% of the papers presented at the MPSA meetings from 2009 to 2012,7 comprising 3,569 articles. Both datasets were derived from the conferences' programs, available online. (To provide a better sense of the information conveyed in these programs, we present in Figure A1 in the Appendix a snapshot of two sessions scheduled for the 2012 APSA meeting.) Our dataset includes, for each article, its title, authorship, and each author's affiliation. It also includes the session within which the article was due to be presented, and information on the theme, day and time of each session.

Articles were classified into one hundred and fifty-eight institution categories, according to the author(s)' affiliation. These categories include all Top 100 institutions listed in the 2005 U.S. News ranking of Graduate Political Science Programs or in the 2011 Top 100 QS World University Rankings for politics. Articles authored by scholars affiliated to institutions in neither of these 'Top 100' lists were classified in one category.8
Footnote 7: The MPSA has between 60 and 63 sessions in each day-time slot. We randomly selected sixteen sessions in each day-time slot, and collected information on session characteristics (time and day) and on all articles and participants in each of these sessions.
Footnote 8: Most of the articles - 70.1% - were single-authored. If an article had more than one author, the author affiliated to the institution with the highest ranking was considered.
We then collected articles' outcomes from two sources: the Social Science Research Network (SSRN) and Google Scholar. From the Google Scholar outcomes (citation counts recorded by September 2014) we detected only weak evidence of conference effects, and these were negligible in size, consistent with the view that it is too early to investigate this outcome.9 We therefore present here a description (and subsequently an analysis) only of the SSRN data.

Footnote 9: These results are reported in the Appendix.

SSRN is a leading website repository for academic working papers in the social sciences, boasting over 241,000 authors and more than 1.7 million users. Authors upload their papers without charge, and any paper an author uploads is then downloadable for free.

SSRN is especially useful because, at the time of the conference, the papers due to be presented are largely unpublished, and SSRN tracks their visibility at this early stage. But specific challenges in tracking unpublished papers remain. Often, the titles of papers change over time before publication. Indeed, authors' projects often develop, evolve, divide or combine in ways that mean one cannot objectively say whether a specific working paper is the same paper that was presented at a conference or not.10

Footnote 10: The existing literature which, in other contexts, investigates the performance of academic papers focuses mainly on published articles (Azoulay et al, 2010; Furman and Stern, 2011; Borjas and Doran, 2012; Waldinger, 2012).
We experimented with different search criteria and, in order to increase our chances of finding conference articles, our final SSRN search was based on authorship and an abbreviation of the article's title. Systematic data retrieval was commissioned from a commercial service provider, Mozenda Inc. We recorded all cases in which the title of the retrieved paper differed significantly from the title in the conference program, based on a Soundex search algorithm.
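The paper does not spell out the exact matching rule, but the Soundex comparison it mentions can be illustrated as follows. This is a minimal sketch under our own assumptions: a standard Soundex encoder plus a crude word-level overlap check, with the 0.6 threshold chosen purely for illustration.

```python
# Minimal sketch (our own illustration, not the authors' procedure): encode each word
# of a title with the standard Soundex algorithm and flag title pairs whose encoded
# words overlap too little.

def soundex(word: str) -> str:
    """Standard Soundex code (initial letter plus three digits); simplified H/W handling."""
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4", **dict.fromkeys("MN", "5"), "R": "6"}
    word = "".join(ch for ch in word.upper() if ch.isalpha())
    if not word:
        return ""
    digits, prev = [], codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            digits.append(code)
        if ch not in "HW":          # vowels reset the previous code; H and W do not
            prev = code
    return (word[0] + "".join(digits) + "000")[:4]

def similar_title(title_a: str, title_b: str, min_share: float = 0.6) -> bool:
    """Share of Soundex-coded words in common, relative to the shorter title."""
    codes_a = {soundex(w) for w in title_a.split()} - {""}
    codes_b = {soundex(w) for w in title_b.split()} - {""}
    if not codes_a or not codes_b:
        return False
    return len(codes_a & codes_b) / min(len(codes_a), len(codes_b)) >= min_share

# Example: compare a program title with a retrieved SSRN title.
print(similar_title("Conferences and Academic Impact", "Academic Impact of Conferences"))
```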
In our main analysis, we consider all articles. However, all results hold (and in fact become stronger) when we restrict the sample only to articles with good title matches. In addition, a research assistant conducted a manual check on 900 randomly chosen articles (a sample approximating 5% of our full dataset). From this sample, 98.5% of the articles identified on SSRN in the automated search were
considered correct. The SSRN papers identified in the automated search do not comprise a complete list, but rather (depending on the criteria used to define a match) 66-88% of the papers within the two conferences' programmes that could be manually identified on SSRN.11

Footnote 11: Of those conference articles that could be discovered, with a high degree of confidence, by a manual search on SSRN (articles that match the conference paper in their authorship and date and are a close variation of the conference title), around 12% had been missed by the automated search. Of those conference articles that could be discovered, with a lower degree of confidence, by a manual search on SSRN (articles that only match the conference paper in their authorship and date), 34% had been missed by the automated search, but the majority of the papers within this number had a substantially different title to the conference paper.
Altogether, the automated search found 2,695 APSA articles and 107 MPSA articles. This is our main sample. A main reason for finding a larger fraction of APSA than MPSA articles (22% versus 3%) is that the APSA encourages accepted authors to post their articles in the SSRN APSA Annual Meeting Series, while there is no SSRN working paper series for the MPSA meeting.
Table A1 in the Appendix shows authors' affiliations for the following groups: the entire list of APSA and MPSA articles from the programs (columns 1 and 2) and the SSRN sample (columns 3 and 4). Relative to the universe of conference papers, our sample somewhat over-represents APSA articles by authors with lower-tier affiliations and under-represents MPSA articles by authors with lower-tier affiliations. In the APSA (MPSA) meeting program, 40.6% (46.3%) of articles are authored by a scholar outside a Top 100 institution. As shown in column 3 (column 4), this fraction is 46.9% (42%) in our SSRN sample of APSA (MPSA) articles. Partly to account for these differences, in our main regressions we control for many covariates, including authors' affiliation fixed effects, and we replicate the regressions for a subsample of APSA articles that most resemble the sample of MPSA articles (as detailed in section 3.1).
We also checked whether 2012 "treatment" articles differ systematically from the "control" articles in a way that could confound our conference effect estimates. That would be the case, for example, if the conference cancellation in 2012 increased the likelihood of lower-quality articles being posted on SSRN. In Table A2 in the Appendix, we present results from standard diff-in-diff regressions that use author and article pre-determined characteristics as the explained variables. We find some correlations, but these are not a concern. In columns 1 and 2, we show that 2012 APSA articles are in fact more likely to be written by more experienced authors. This characteristic is positively correlated with articles' downloads, suggesting that our estimates of conference effects are, if anything, underestimates.12

Footnote 12: This selection also explains why the conference effects estimated on the propensity score sample are larger than those estimated on the whole sample, as will be described in the analysis in Table 3.
2.3 Outcomes and Summary Statistics
The main outcome we use is the number of an article's downloads, measured by the number of times a paper has been delivered by SSRN to an interested party, either electronically or as a purchased bound hard copy. At the working paper stage, this is the most widely used indicator of visibility. For example, though SSRN also records articles' views and citations, downloads are the primary measure used in determining its rankings of authors and papers.13

We also gathered, for each article in our sample, the combined number of downloads of working papers that were authored by the article's authors and posted on SSRN shortly after the conference: within nine months of the academic meeting. In terms of predetermined characteristics, we additionally collected information on: the date the article was first posted on SSRN, the number of articles posted on SSRN (prior to the conference) by all the article's authors, and the date of the earliest article posted on SSRN by any of the article's authors.

Footnote 13: For the sake of space, we present in the Appendix results for conference effects on SSRN views and SSRN citations.
A relevant difference between the APSA and MPSA meetings is that the MPSA meeting occurs five months earlier. We account for this by performing our analysis using outcomes collected at different times. In constructing outcome variables, we used observations collected in August 2013 and 2014 for MPSA articles, and in January 2014 and 2015 for APSA articles. These observation dates correspond to 15 months (roughly one year) and 27 months (roughly two years) after the respective 2012 conferences.14
Table 1 presents summary statistics for all articles considered in the main analysis. We trimmed 5% of the sample (158 observations), excluding outliers with the largest and smallest numbers of downloads. On average, conference articles have been posted on SSRN for 1,115 days (close to 3 years). They have accumulated 64 downloads by 15 months after the 2012 conferences, and 74 downloads one year later.
[Table 1 here]
3 Results

3.1 The Effect of Conferences on Articles' Visibility
To quantify the effect of conferences, we adopt a difference-in-differences approach, considering the sample of articles in the programs of the APSA and MPSA Annual Meetings. In the treatment group are articles that were to be presented in the cancelled 2012 APSA meeting. We test the hypothesis that articles in the treatment group have reduced academic visibility, compared with articles that were scheduled to be presented in conferences that took place.
In Table 2, we present unconditional difference-in-differences in the average number of downloads for APSA and MPSA articles, for years in which both conferences took place (2009-2011) and the year in which the APSA meeting was cancelled (2012). Panel A shows downloads recorded 15 months after the 2012 conferences, and Panel B one year later (27 months after the 2012 conferences). In both panels, it is noticeable that the difference in outcomes between 2012 and previous years is larger for APSA than for MPSA articles, suggesting a conference effect. The difference-in-differences for the number of downloads is -16.7 in Panel A and -20.2 in Panel B.
Footnote 14: Using monthly data from RePEc, we checked whether there are seasonal effects on papers' downloads, and find none.
[Table 2 here]
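Using the averages reported in Table 2, the unconditional difference-in-differences is the change for APSA articles minus the change for MPSA articles. For Panel A:

(46.4 - 68.6) - (54.2 - 59.7) = -22.2 - (-5.5) = -16.7,

and analogously for Panel B: (58.9 - 77.5) - (72.9 - 71.3) = -18.6 - 1.6 = -20.2.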
Next, we present our estimates, adding controls. We estimate (1):

Y_{iT} = \alpha + \beta_1 (APSA \times 2012)_i + \beta_2 APSA_i + \sum_{t=2010}^{2012} \gamma_t \mathbf{1}[year_i = t] + \beta_3 X_i + \varepsilon_{iT}     (1)

where i indexes articles and t indexes conference years. Y_{iT} is the outcome observed at time T; APSA_i is a dummy indicating whether the article is in an APSA Meeting program; the \gamma_t \mathbf{1}[year_i = t] terms are conference year dummies; and (APSA \times 2012)_i is an indicator for whether the article is in the 2012 APSA meeting program. The vector of covariates X_i includes author and article characteristics, and \varepsilon_{iT} is a random term. We cluster standard errors at the author affiliation-APSA/MPSA level. The conference impact is revealed by the coefficient \beta_1.
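A minimal sketch of how a specification like (1) can be estimated is given below. This is our own illustration under assumed column names (`downloads`, `apsa`, `year`, the controls, and the clustering variable), not the authors' code.

```python
# Minimal sketch of a specification like (1): OLS with the APSA x 2012 interaction,
# conference-year dummies and controls, with standard errors clustered at the
# author affiliation-APSA/MPSA level. All column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("conference_articles.csv")          # hypothetical input file
df["apsa_2012"] = df["apsa"] * (df["year"] == 2012)  # treatment indicator (APSA x 2012)

formula = (
    "downloads ~ apsa_2012 + apsa + C(year) "
    "+ top10 + n_authors + n_prior_ssrn + years_since_first_ssrn "
    "+ days_in_ssrn + I(days_in_ssrn ** 2)"
)
fit = smf.ols(formula, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["affiliation_conference_group"]}
)
print(fit.params["apsa_2012"], fit.bse["apsa_2012"])  # the conference effect (beta_1)
```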
Table 3, columns 1-5, shows regression results using as the dependent variable the number of downloads recorded 15 months after the 2012 conferences. In column 1, we control for whether the article is authored by a scholar affiliated to a Top 10 institution and for the number of authors. As proxies for authors' experience, we consider the aggregate number of papers posted on SSRN by all of article i's authors, and the earliest year that a paper was posted on SSRN among all authors of article i. The estimates show that articles authored by a scholar affiliated to a Top 10 institution15 have an additional 10.4 downloads in comparison to other articles. They also indicate that articles have an extra 2.7 downloads for each additional author. To control for timing effects, in addition to the conference year dummies, we added covariates for the number of days the article has been posted on SSRN and its square.16

In Table 3, column 1, the diff-in-diff coefficient - the variable of interest that identifies the conference effect - is negative (-17.92) and statistically significant at the 10% level. Column 2 includes authors' affiliation fixed effects. This set of variables is relevant in explaining articles' downloads: they are jointly statistically significant at the 5% level. In this specification, the size of the diff-in-diff coefficient increases (to -21.2) and becomes statistically significant at the 5% level.

Footnote 15: An institution either in the top 10 of the 2005 U.S. News ranking of Graduate Political Science Programs or in the top 10 of the 2011 Top 100 QS World University Rankings for politics.
Footnote 16: We also explored including higher-order polynomials for the number of days on SSRN, but they are not statistically significant and the results do not change with the inclusion of these extra variables.
To account for the fact that the MPSA sample is small and that these papers differ in some characteristics from the APSA papers, as shown in Table A1, we conducted regressions restricting the sample to the MPSA articles and only those APSA articles that are sufficiently similar to the MPSA articles. To find this group, we estimated a propensity score based on a logit model that controls for authors' characteristics and the time variables described in Table 3, column 2. In column 3, we restrict the sample to MPSA papers and APSA articles whose propensity scores are in the 95th percentile, and we run the regressions using the same controls as in column 2. The size of the diff-in-diff coefficient increases to -26.4 and the estimated effect remains statistically significant at the 5% level.
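The exact trimming rule behind the propensity-score sample is not fully spelled out in the text; the sketch below is one possible reading, flagged as an assumption: fit a logit for the probability that an article is an MPSA article, and keep the MPSA articles together with the APSA articles whose fitted scores exceed a chosen percentile cutoff.

```python
# Minimal sketch (one possible reading of the text, not the authors' code): estimate a
# propensity score for being an MPSA article and keep only the APSA articles that most
# resemble the MPSA articles. The 95th-percentile cutoff below is an assumption.
# 'df' and the column names continue the hypothetical layout of the earlier sketch.
import statsmodels.formula.api as smf

df["mpsa"] = 1 - df["apsa"]
ps_formula = ("mpsa ~ top10 + n_authors + n_prior_ssrn + years_since_first_ssrn "
              "+ days_in_ssrn + I(days_in_ssrn ** 2) + C(year)")
df["pscore"] = smf.logit(ps_formula, data=df).fit(disp=0).predict(df)

cutoff = df.loc[df["apsa"] == 1, "pscore"].quantile(0.95)   # assumed rule
ps_sample = df[(df["apsa"] == 0) | (df["pscore"] >= cutoff)]
```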
In columns 4 and 5, we control for differential time trends for APSA and MPSA articles. It is conceivable that articles differ in the time profile of their downloads. Since the conference cancellation affected newer (2012) rather than older articles, and outcomes are observed in the short term (e.g. 2014), then if, for example, MPSA articles accumulate downloads earlier than APSA articles, this would generate a positive bias in the effect estimated by the diff-in-diff coefficient. We therefore include in the regressions interactions of the APSA indicator with days in SSRN and with days in SSRN squared, and we replace the year dummies with a year time trend. In column 4, we present results for the whole sample. The size of the estimated impact of conferences decreases in magnitude to 13.9, and the diff-in-diff coefficient remains statistically significant. In column 5, we repeat the specification and restrict to the propensity score sample. The coefficient of interest is -22.4 and statistically significant at the 1% level. In summary, the results in columns 1-5 indicate that, on average, articles in the 2012 APSA conference would have benefited from approximately an extra 14-26 downloads in the 15 months after the conference if Hurricane Isaac had not occurred.

In the remaining columns of Table 3, we present results for downloads recorded one year later, 27 months after the 2012 conferences. We replicate the most complete
specification in columns 4 and 5, using the whole sample (column 6) and the propensity score sample (column 7). Since we are controlling for the number of days the article has been posted on SSRN, the difference in the estimated diff-in-diff coefficients for outcomes recorded at different times (columns 4 and 6, and columns 5 and 7) should reflect a change in the conference effect. The differences in the estimated coefficients are visible. They reflect the increase in the size of the effect from (roughly) one year to two years after the cancelled conference. Focusing on the most conservative estimate of the conference effect, using the whole sample and the most complete specification, our findings indicate that, on average, an article in the 2012 APSA conference would have gained an additional 14 downloads one year after that meeting and 16.5 downloads two years after the meeting, had the conference taken place.
[Table 3 here]
3.2 The Effects of the Session Audience on Articles' Visibility
Next, we investigate a specific channel determining conference effects, as a means to corroborate the existence of such effects. We test for whether there are gains that come from session attendance. In this analysis, instead of looking for a different conference for comparison with the APSA meeting (the MPSA meeting being the closest one), we focus our investigation within the sample of APSA articles, and explore heterogeneity in the size of the session audience. We conjecture that articles that would have had a larger audience were more hindered by the 2012 APSA meeting cancellation. We conduct difference-in-differences regressions to test the hypothesis that the number of downloads is lower for articles with a higher (expected) audience in the cancelled 2012 conference than in previous editions.
Before presenting results, we explain our measure of 'Expected Audience'. In creating this variable, we followed the intuition that attendees/authors tend to sort into attending sessions related to their own research interests. Expected Audience_i is a function of the total number of articles on the same theme as article i across the meeting in which i was presented (T_i), the number of articles to be presented in the same time slot and theme as article i but in a different session (N_i), and the number of co-synchronous sessions on the same theme as article i (S_i). (The crude intuition here is that the audience in a given session will be drawn from the pool of other authors whose papers at the conference are on the theme of the session, excluding the article's own author, divided equally across the simultaneous sessions on this theme.)
Expected\,Audience_i = \frac{T_i - N_i}{S_i} - 1     (2)
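A minimal sketch of how a measure along the lines of (2) can be computed from the program data is shown below. The formula in (2) is our reconstruction of the authors' verbal definition, and the column names (`year`, `theme`, `slot`, `session`) are hypothetical.

```python
# Minimal sketch (our reconstruction of the verbal definition, not the authors' code):
# T = same-theme articles at the meeting, N = same-theme articles in the same time slot
# but a different session, S = co-synchronous same-theme sessions.
import pandas as pd

program = pd.read_csv("apsa_program.csv")   # hypothetical file: one row per article

T = program.groupby(["year", "theme"])["session"].transform("size")
slot_theme = program.groupby(["year", "theme", "slot"])
same_slot_theme = slot_theme["session"].transform("size")
own_session = program.groupby(["year", "theme", "slot", "session"])["session"].transform("size")
N = same_slot_theme - own_session                 # same theme and slot, different session
S = slot_theme["session"].transform("nunique")    # simultaneous same-theme sessions

program["expected_audience"] = (T - N) / S - 1
```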
In constructing this variable, we used the APSA Meeting classification of articles (and sessions) into 132 session themes (e.g. Public Opinion, Normative Political Theory, Political Psychology, Legislative Studies, Canadian Politics).17 Between 2009 and 2012, each theme gathered 33.44 articles per year, on average. (Note that there are highly populated themes, such as Comparative Politics, Foundations of Political Theory and International Political Economy, that have more than 100 presented articles per year.) The average of articles' Expected Audience is 53. In the Appendix, we show histograms of Expected Audience_i per conference year.
Footnote 17: They include 52 main theme panels (that contain 90% of the articles) and 70 remaining themes that vary by year. The main theme sections are: Political Thought and Philosophy; Foundations of Political Theory; Normative Political Theory; Formal Political Theory; Political Psychology; Political Economy; Politics and History; Political Methodology; Teaching and Learning; Political Science Education; Comparative Politics; Comparative Politics of Developing Countries; The Politics of Communist and Former Communist Countries; Advanced Industrial Societies; European Politics and Society; International Political Economy; International Collaboration; International Security; International Security and Arms Control; Foreign Policy; Conflict Processes; Legislative Studies; Presidency Research; Public Administration; Public Policy; Law and Courts; Constitutional Law and Jurisprudence; Federalism and Intergovernmental Relations; State Politics and Policy; Urban Politics; Women and Politics Research; Race, Ethnicity, and Politics; Religion and Politics; Representation and Electoral Systems; Political Organizations and Parties; Elections and Voting Behavior; Public Opinion; Political Communication; Science, Technology, and Environmental Politics; Information Technology and Politics; Politics, Literature, and Film; New Political Science; International History and Politics; Comparative Democratization; Human Rights; Qualitative and Multi-method Research; Sexuality and Politics; Health Politics and Policy; Canadian Politics; Political Networks; Experimental Research.
In Figure 1, we show the relationship between future downloads and articles' Expected Audience for the 2009-2011 editions (in which the conference took place). In Figure 2, we illustrate this relationship for the sample of articles in the 2012 program, when the conference was cancelled. Each dot indicates an article-outcome-year. To ease visualization, we plot a linear regression line in both figures. While a positive relationship is visible in Figure 1, almost none is observed in Figure 2. The slope of the line in Figure 1 is 0.113 and is statistically significant at the 1% level, while the slope in Figure 2 is 0.026, with a p-value of 47%. Figure 1, as opposed to Figure 2, shows articles' (future) downloads increasing in the Expected Audience measure.
[Figures 1 and 2 here]
This relationship suggests that attendees download articles they see during conference sessions.18 This mechanism, in turn, implies a conference impact. We investigate this further in a regression framework, in which we estimate (3), using as the dependent variable the number of downloads recorded 15 months after the cancelled conference:

Y_{iT} = \alpha + \beta_1 (Expected\,Audience_i \times 2012) + \beta_2\, Expected\,Audience_i + \sum_{t=2010}^{2012} \gamma_t \mathbf{1}[year_i = t] + \beta_3 X_i + \varepsilon_{iT}     (3)
The impact of the conference is identified from the interaction of the variable Expected Audience with a dummy for the 2012 cancelled conference, and the coefficient of interest is \beta_1. It reveals the change in the relationship between expected session audience and future downloads for articles in the cancelled versus occurring conferences, and hence reflects the forgone downloads from the cancelled meeting.

Footnote 18: Alternatively, attendees may download articles they consider relevant by finding them in the APSA Meeting program while at the conference. The gain in articles' visibility due to the conference might then be explained by the commitment to attend the academic meeting putting scholars in the state of mind of learning about the research of participants, even if they are skipping sessions. We perform tests with a slightly modified variable (Modified Expected Audience_i = N_i - 1) and find the same qualitative results as in Figures 1 and 2, and practically the same findings as those we report in Tables 4-6.
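A minimal sketch of a specification like (3) is given below, again under assumed column names; here the treatment is the continuous Expected Audience measure interacted with a 2012 indicator, with theme fixed effects and errors clustered at the theme level (as in Table 4).

```python
# Minimal sketch of a specification like (3) for the APSA-only sample. Column names
# ('downloads', 'expected_audience', 'year', 'theme', ...) are hypothetical and continue
# the layout of the earlier sketches.
import statsmodels.formula.api as smf

apsa = df[df["apsa"] == 1].copy()
apsa["year2012"] = (apsa["year"] == 2012).astype(int)

formula = ("downloads ~ expected_audience:year2012 + expected_audience "
           "+ C(year) + C(theme) + days_in_ssrn + I(days_in_ssrn ** 2)")
fit = smf.ols(formula, data=apsa).fit(
    cov_type="cluster", cov_kwds={"groups": apsa["theme"]}
)
print(fit.params["expected_audience:year2012"])   # beta_1: forgone downloads per audience member
```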
One should note an important point of difference between this analysis and the one in the last section. In the present analysis we quantify only one part of the conference effect: the visibility gained via session participants. There are other possible conference gains not quantified by the coefficient \beta_1. (For example, articles may have experienced improvements due to advice from discussants or chairs, leading to an increase in their visibility.)
Returning to the results, in Table 4 we present estimates with standard errors clustered at the theme level. In column 1, we begin with a specification controlling for a polynomial in the number of days the article has been posted on SSRN. Consistent with Figures 1 and 2, the coefficient \beta_1 is negative (-0.116) and statistically significant at the 5% level. In column 2, we add 131 theme fixed effects (jointly statistically significant at the 1% level), and the diff-in-diff coefficient remains practically the same (-0.113). In column 3, we include author affiliation fixed effects. The diff-in-diff coefficient is statistically significant at the 5% level and its magnitude, again, barely changes (-0.119). The robustness of the size of the diff-in-diff coefficient to different sets of controls reflects the effectively random assignment of articles to the cancelled versus occurring conference conditions, determined by the hurricane in 2012.
In the specification in column 2, the Expected Audience coefficient is identified from variation in the number of within-theme articles over years, and in the number of same-theme sessions occurring simultaneously, per conference. There is a concern that the Expected Audience variable is endogenous, correlated with unobservables related to articles' quality or impact potential. These might be observed by conference organizers, internalised in the allocation of articles to sessions in the program, and captured by the Expected Audience variable. For example, the organizers might allow co-synchronicity of sessions comprising weaker articles within a given theme to a greater extent than of those comprising the most promising articles. (In this case, the diff-in-diff coefficient still captures a causal effect, but it is the return to articles' quality from presenting at a conference.) In column 4, we add to the covariates in column 3 sixteen dummies for the session time-day slot to which the article has been allocated, as the time-day allocation might correlate with articles' perceived quality. These indicators are not jointly statistically significant: the p-value for an F-test is 32%. The diff-in-diff coefficient remains statistically significant and its magnitude remains 0.116.
It is also possible that the variable Expected Audience is in fact capturing variation in the number of submissions by theme, correlated with fashions in the profession and with articles' prospective downloads. To account for this, in column 5 we present results for the specification in column 3 (controlling for theme and affiliation fixed effects), and include session theme-specific year trends. These last controls are meant to account for possibly different time trends across articles from different themes. The size of the diff-in-diff coefficient falls to 0.0794, but it is still negative and statistically significant at the 5% level. In column 6, we account for possibly different dynamics of download accumulation across articles with different expected audiences (it might be that general-interest articles with higher expected audiences differ from niche articles). We include in the regression interactions of expected audience with days in SSRN and with days in SSRN squared, and replace the year dummy variables with a linear year trend. The diff-in-diff coefficient increases substantially in size, to -0.19, and remains statistically significant.

Overall, the estimates of \beta_1 indicate that for every 6-13 articles in the same theme and conference year, there is an increase of one download for article i. Considering the distribution of the Expected Audience variable, on average an article gains between 4 and 10 downloads from the session audience at the APSA conference in the 15 months following the meeting.
[Table 4 here]
In Table 5, we present results using other proxies for session attendance. First, we tested whether there is a differential effect for a paper that faces direct competition for its session audience from an article written by a famous author. We coded whether an article is allocated to the same theme and time slot as a paper written by someone well-known in the field (so that the two papers face roughly the same group of interested participants), but to a different session. We named this variable CompeteFamousAuthor. We classified as a famous author someone who is on the editorial board of the American Political Science Review or of the American Journal of Political Science (the top journals in the field; McLean, Blais, Garand and Giles 2009) in the respective conference year.19 In this case, we conjecture that a reasonable part of the prospective audience of article i will migrate to the session of the famous author. We replace Expected Audience with the variable CompeteFamousAuthor in the regressions. In column 1, we focus on the specification that includes session time-day fixed effects (to control for the possibly endogenous allocation in the program), a polynomial for days in SSRN, and year and theme fixed effects. In column 2, we repeat this specification and include author affiliation fixed effects. In the same spirit as the previous test, we check whether, in comparison with previous APSA editions, articles allocated to sessions that are likely to be poorly attended (because they face competition from an editorial-board author) were less handicapped by the conference cancellation than articles that did not face such competition. The diff-in-diff coefficient is statistically significant at the 10% level in column 1 (and at the 13% level in column 2), and the size of this effect is close to 14 downloads.

Footnote 19: This classification is obviously very simplistic, but easily traceable. Alternative measures for 'stars' in the profession are based on their citations, grants and awards (Azoulay et al, 2010), information that is difficult to recover by conference year. In the data, the group of editorial-board scholars authors approximately 2% of articles in panel sessions per year. Approximately 5% of other articles faced competition with an editorial-board paper.
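The CompeteFamousAuthor indicator described above can be constructed directly from the program data, as in the following sketch. It continues the hypothetical `program` dataframe from the earlier sketch, the `famous_author` flag is an assumed column, and the rule is simplified to "some editorial-board paper appears in the same theme and slot, but not in the article's own session".

```python
# Minimal sketch (simplified rule, hypothetical column names): flag articles that share
# a theme and time slot with an editorial-board ("famous") paper sitting in another session.
famous_in_slot_theme = program.groupby(["year", "theme", "slot"])["famous_author"].transform("max")
famous_in_own_session = program.groupby(["year", "theme", "slot", "session"])["famous_author"].transform("max")
program["compete_famous_author"] = ((famous_in_slot_theme == 1) & (famous_in_own_session == 0)).astype(int)
```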
Another source of heterogeneity in articles' visibility within the conference relates to the allocated session time slot. Sessions occurring in the first slot are often perceived to be poorly attended: in the APSA meeting, these occur on Thursday at 8am, when conference participants are still arriving and registering. Our test consists of examining whether articles allocated to the Thursday 8am slot in the cancelled 2012 APSA meeting have higher downloads (relative to articles allocated to other slots) than articles allocated to the first session in the APSA meetings of 2009-11. We test for a 'conference first session' effect using the same approach as in the previous test. In Table 5, column 3, we present the results, replicating the controls in column 1. They are marginally supportive of our hypothesis. The coefficient for allocation of an article to the first session in 2012 is positive and statistically significant at the 11% level. In column 4, we report results also including author affiliation fixed effects. The magnitude of the coefficient remains the same, but the coefficient is only statistically significant at the 13% level. Interestingly, the magnitude of the first-session effect is, again, close to 14 downloads.20

Footnote 20: We replicate this test for other day-times commonly perceived to be weakly attended - the last session of the conference (Sunday, 10:15am) and the morning after the opening reception - and we do not find a statistically significant effect (these results are not shown in the paper).
[Table 5 here]
3.3 Conference Effects by Authorship
For various reasons, one may expect some heterogeneity, by authorship, in conference effects. Conferences gather a group of unpublished articles. In the conference's absence, any article has an ex ante expected readership, based (at least in part) on its authors' characteristics: their institutional affiliation, the existing visibility of their previous papers, and so on. In this section, we investigate whether there are differential conference effects by such characteristics. Articles whose authors' characteristics imply a high ex ante expected readership may benefit more from the conference, owing to unbalanced sorting of attendees into their presenting sessions. But, on the other hand, for these articles there may be less to gain: academics interested in the topic would have become aware of the articles anyway. Indeed, it is conceivable that the conference may lead such articles to lose readers, as interested academics become aware of other work by less established authors. (The analogous reasoning can be applied to articles with a lower ex ante expected readership. These articles may have a smaller audience at the conference, but this audience may include a greater number who, though interested in the topic, would not have encountered the article otherwise.) The net effect of these forces will determine the size and sign of the conference effect. In our analysis, we use two proxies for this author-based ex ante expected readership: (i) authors' institutional affiliation, and (ii) whether the authors have a previous paper posted on SSRN. Kim et al (2009) and Oyer (2006) show that scholars affiliated to higher-tier institutions are more cited and have a higher chance of publishing in top journals. Therefore, it is reasonable to assume that on average their expected readership is higher. Likewise, more senior and better-known authors are more likely to have previous articles posted on SSRN.
In Table 6 we look for heterogeneous conference session effects, based on these two characteristics. Panel A presents results using data recorded 15 months after the cancelled APSA meeting, and Panel B using data recorded one year later, 27 months after the cancelled meeting. In column 1 we report results considering the entire APSA sample, using as controls days in SSRN, days in SSRN squared, and year, theme and affiliation fixed effects. We separate the effect for articles authored by scholars who have a previous article posted on SSRN from those that do not, by interacting the expected audience for 2012 APSA articles with these statuses. The diff-in-diff coefficient is only statistically significant for articles authored by newcomers to SSRN, indicating a conference effect for this group. Columns 2 and 3 present coefficients estimated from separate regressions by SSRN status, in which we find the same qualitative results. In the remaining columns of Table 6, we present the conference effect by affiliation tier. In column 4, we interacted the Expected Audience for 2012 articles with three categories of authors' affiliation: (i) Top 10 institutions, (ii) Top 100 but outside the Top 10, and (iii) institutions outside the Top 100. The diff-in-diff coefficient is negative and statistically significant, at (at least) the 11% level, only for articles authored by scholars affiliated to institutions outside the Top 10. In columns 5-7, we conducted separate regressions by institution category and find similar results. The effect is especially noticeable, in terms of significance (at the 5% level), among papers authored by scholars affiliated to institutions outside the Top 100. Consistent with the results in section 3.1, these conference session effects seem to be increasing over time. The effects documented in Panel B, using data recorded one year later than in Panel A, show the same patterns but are larger in magnitude.
[Table 6 here]
Next, we examine whether the conference has effects beyond the visibility of the articles presented: on the visibility of the authors' other work. Using information from authors' SSRN profiles, we gathered information on the number of downloads of other articles posted on SSRN in the nine months following the conference. For example, for an article i presented at the 2010 APSA Meeting (September 2010), this variable is the combined number of SSRN downloads of all articles posted by all authors of article i, excluding article i, from September 2010 to April 2011. Table 7 presents results using this measure as the dependent variable, replicating the specifications and data decomposition in Table 6. In both Panels A and B, the diff-in-diff coefficient is statistically significant (at the 10% level) only for the group of articles authored by scholars outside the Top 100 (column 7). This suggests that, for these scholars, the APSA Meeting increases the visibility not only of the presented article but also of their wider portfolio.
[Table 7 here]
4 Conclusion
We have provided, in this paper, estimates of the effects of conferences, derived by exploiting a natural experiment. To the best of our knowledge, this is a wholly novel contribution, in the sense that no previous analysis has applied a compelling identification strategy to this issue. And the issue itself is of considerable importance, because significant resources across all research fields in academia are apportioned to organising and attending such events.
Using articles accepted at a comparator conference as a baseline group for articles in the American Political Science Association Annual Meeting, our diff-in-diff findings suggest a conference effect of around 17-30 downloads in the 27 months (approximately two years) following the 2012 conferences. It is worth noting that, as a remedy for the 2012 cancellation, the APSA sent a hard copy of the program to all participants, and of course the program was made available online. It is therefore possible that authors (notwithstanding the cancellation) gained some visibility through these channels. To that extent, our estimates may be viewed as a lower bound for the conference effect.
Our results suggest that early visibility leads to an advantage that continues to accumulate. It is notable that the initial conference effect (observed 15 months after the cancelled conference) continues to increase over the following 12 months. This finding recalls those of Oyer (2006) and Salganik et al (2006), in the sense that 'initial conditions' matter. On the other hand, conference effects somewhat mitigate the 'stronger-get-stronger' dynamic observed in academia. For example, Oyer (2006) shows that initial career placement has a causal effect on long-run publication outcomes, favouring scholars who are employed in higher-ranked institutions. Our findings show that it is authors from institutions outside the Top 10, and early-career authors, who gain visibility via session attendance. Our results also resonate with the literature that examines the consequences of decreasing communication costs among academics (Agrawal and Goldfarb, 2008; Kim et al, 2009; Ding et al, 2010). This literature has focused on the internet as a facilitator of direct interactions, but conferences can be viewed as another such facilitator, and the locus of principal benefit seems to be similar: for example, Agrawal and Goldfarb (2008) find that the introduction of the early internet (Bitnet) mainly benefited the publication prospects of middle-tier universities.
In this article, our main focus has been on the visibility gain for the work that is presented. This encompasses both any direct gain, through an advertising effect, and any indirect gain, achieved if the conference leads to improvements in the work itself that in turn increase its eventual readership. We do not, in this present work, consider other conference benefits: network formation, idea formation and so forth. These are avenues for future work.
References

[1] Agrawal, A. and A. Goldfarb. 2008. "Restructuring Research: Communication Costs and the Democratization of University Innovation." American Economic Review 98(4): 1578-1590.
[2] Azoulay, P., J. Graff Zivin and J. Wang. 2010. "Superstar Extinction." The Quarterly Journal of Economics 125(2): 549-589.
[3] Blau, F.D., J.M. Currie, R.T.A. Croson and D.K. Ginther. 2010. "Can Mentoring Help Female Assistant Professors? Interim Results from a Randomized Trial." American Economic Review 100(2): 348-352.
[4] Borjas, G. and K.B. Doran. 2012. "The Collapse of the Soviet Union and the Productivity of American Mathematicians." The Quarterly Journal of Economics 127(3): 1143-1203.
[5] Ding, W., S. Levin, P. Stephan and A. Winkler. 2010. "The Impact of Information Technology on Scientists' Productivity, Quality and Collaboration Patterns." Management Science 56(9): 1439-1461.
[6] Furman, J.L. and S. Stern. 2011. "Climbing atop the Shoulders of Giants: The Impact of Institutions on Cumulative Research." American Economic Review 101(5): 1933-1963.
[7] Galang, M.T., J.C. Yuan, D.J. Lee, V.A. Barao, N. Shyamsunder and C. Sukotjo. 2011. "Factors influencing publication rates of abstracts presented at the ADEA annual session & exhibition." Journal of Dental Education 75(4): 549-556.
[8] Ioannidis, J.P.A. 2012. "Are medical conferences useful? And for whom?" Journal of the American Medical Association 307: 1257-1258.
[9] Kim, E.H., A. Morse and L. Zingales. 2009. "Are elite universities losing their competitive edge?" Journal of Financial Economics 93(3): 353-381.
[10] Lee, D.J., J.C. Yuan, S. Prasad, V.A. Barão, N. Shyamsunder and C. Sukotjo. 2012. "Analysis of abstracts presented at the prosthodontic research section of IADR General Sessions 2004-2005: demographics, publication rates, and factors contributing to publication." Journal of Prosthodontics 21(3): 225-231.
[11] McLean, I., A. Blais, J. Garand and M. Giles. 2009. "Comparative journal rankings: A survey report." Political Studies Review 7: 18-38.
[12] Merton, R. 1968. "The Matthew Effect in Science: The reward and communication systems of science are considered." Science 159(3810): 56-63.
[13] Oettl, A. 2012. "Reconceptualizing Stars: Scientist Helpfulness and Peer Performance." Management Science 58(6): 1122-1140.
[14] Oyer, P. 2006. "Initial Labor Market Conditions and Long-term Outcomes for Economists." Journal of Economic Perspectives 20(3): 143-160.
[15] Salganik, M.J., P. Dodds and D.J. Watts. 2006. "Experimental Study of Inequality and Unpredictability in an Artificial Cultural Market." Science 311: 854-856.
[16] Toma, M., F.A. McAlister, L. Bialy, D. Adams, B. Vandermeer and P.W. Armstrong. 2006. "Transition From Meeting Abstract to Full-length Journal Article for Randomized Controlled Trials." Journal of the American Medical Association 295(11): 1281-1287.
[17] Waldinger, F. 2012. "Peer effects in science - Evidence from the dismissal of scientists in Nazi Germany." Review of Economic Studies 79(2): 838-861.
[18] Winnik, S., D.A. Raptis, J.H. Walker, M. Hasun, T. Speer, P.-A. Clavien, M. Komajda, J.J. Bax, M. Tendera, K. Fox, F. Van de Werf, C. Mundow, T.F. Lüscher, F. Ruschitzka and C.M. Matter. 2012. "From abstract to impact in cardiovascular research: factors predicting publication and citation." European Heart Journal 33(24): 3034-3045.
Notes to Figures 1 and 2: Each circle is the outcome of an article in time T. The lines are predicted values from a linear regression of downloads on expected audience and a constant.
Table 1 - Summary Statistics

Variable | Mean | Std. Dev. | Min | Max | Obs.
Number of article downloads (by 2014) | 64.03 | 47.34 | 6 | 321 | 2,783
Number of article downloads (by 2015) | 73.79 | 57.35 | 8 | 529 | 2,756
Number of downloads from other articles posted in SSRN by authors of article i, within nine months of the conference | 16.52 | 128.09 | 0 | 5,387 | 2,783
Article characteristics:
Author is from a Top 10 institution | 0.10 | 0.30 | 0 | 1 | 2,783
Author is from a (Top 10, Top 100] institution | 0.43 | 0.50 | 0 | 1 | 2,783
Author is from an institution below the Top 100 | 0.47 | 0.50 | 0 | 1 | 2,783
Number of authors | 1.37 | 0.66 | 1 | 4 | 2,783
Number of articles previously posted in SSRN by article's author(s) | 1.71 | 5.60 | 0 | 174 | 2,783
Conference year minus earliest year an article was posted in SSRN by any of the article's authors | 1.41 | 2.48 | 0 | 15 | 2,783
Number of days article i in SSRN | 1,115 | 426 | 0 | 4,786 | 2,783
APSA article | 0.97 | 0.18 | 0 | 1 | 2,783
2012 | 0.20 | 0.40 | 0 | 1 | 2,783
2011 | 0.27 | 0.44 | 0 | 1 | 2,783
2010 | 0.26 | 0.44 | 0 | 1 | 2,783
2009 | 0.27 | 0.44 | 0 | 1 | 2,783

Notes: The number of observations for downloads by 2014 is larger than by 2015 because some articles were removed from SSRN in the interim. Outcomes by 2014 refer to variables recorded in January 2014 for APSA papers and in August 2013 for MPSA papers. Outcomes by 2015 refer to variables recorded in January 2015 for APSA papers and in August 2014 for MPSA papers.
Table 2: Averages

                                       Conference editions
                                Before 2012      2012      Difference
Panel A: Downloads by 2014
  APSA articles                     68.6         46.4        -22.2
  MPSA articles                     59.7         54.2         -5.5
  Difference-in-differences                                  -16.7
Panel B: Downloads by 2015
  APSA articles                     77.5         58.9        -18.6
  MPSA articles                     71.3         72.9          1.6
  Difference-in-differences                                  -20.2

Notes: The period before 2012 corresponds to 2009-2011. The total sample size is 2,783 in Panel A and 2,756 in Panel B.
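For concreteness, the unadjusted difference-in-differences implied by the averages above can be written out directly (this is a restatement of the table entries, not an additional estimate):

\begin{align*}
\text{Panel A (downloads by 2014):} \quad (46.4 - 68.6) - (54.2 - 59.7) &= -22.2 - (-5.5) = -16.7,\\
\text{Panel B (downloads by 2015):} \quad (58.9 - 77.5) - (72.9 - 71.3) &= -18.6 - 1.6 = -20.2.
\end{align*}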
Table 3: Effects of Conferences on Articles' Downloads

Dependent variable: number of downloads, one year after the conference (columns [1]-[5]) and two years after (columns [6]-[7])

                                              [1]         [2]         [3]         [4]         [5]         [6]         [7]
APSA x 2012                               -17.920*    -21.199**   -26.403**   -13.924**   -22.407**   -16.583**   -30.093**
                                          [10.196]    [10.294]    [11.756]     [2.796]     [8.693]     [3.956]    [12.769]
Author is from a Top 10 Institution        10.377**
                                           [3.885]
Number of Authors                           2.719*      2.814**     1.329       3.000**     1.449       3.900**     2.719
                                           [1.515]     [1.344]     [2.341]     [1.338]     [2.310]     [1.720]     [3.185]
Number of articles previously posted
  in SSRN                                   1.428       1.448       0.999       1.212       0.506       0.411      -2.662
                                           [3.091]     [3.383]     [5.683]     [3.346]     [5.481]     [3.690]     [6.857]
Conference year minus earliest year an
  article was posted in SSRN by any of
  the article's authors                     1.311**     1.160**     2.105       1.137*      2.131       1.187*      3.187*
                                           [0.581]     [0.590]     [1.345]     [0.599]     [1.355]     [0.692]     [1.756]
APSA                                        7.101       5.147      10.207      -3.703      -7.017     -13.099     -37.964
                                           [5.073]     [4.818]     [7.810]    [13.699]    [17.115]    [31.635]    [42.773]
2012                                        7.748      13.617       8.990
                                          [11.737]    [11.908]    [14.518]
2011                                        2.561       4.257       3.989
                                           [3.542]     [3.584]     [6.614]
2010                                       -2.911      -1.708       0.091
                                           [2.295]     [2.230]     [6.175]
Number of Days in SSRN                      0.027**     0.030**     0.005       0.023*     -0.027      -0.005      -0.100**
                                           [0.009]     [0.009]     [0.017]     [0.014]     [0.023]     [0.028]     [0.041]
(Number of Days in SSRN)^2                  0.000       0.000       0.000       0.000**     0.000       0.000       0.000**
                                           [0.000]     [0.000]     [0.000]     [0.000]     [0.000]     [0.000]     [0.000]
Year trend                                                                      2.506       2.295       0.747       0.049
                                                                               [1.781]     [3.270]     [2.368]     [4.550]
APSA x Number of Days in SSRN                                                  -0.001       0.035       0.001       0.066
                                                                               [0.014]     [0.031]     [0.025]     [0.049]
APSA x (Number of Days in SSRN)^2                                               0.000**     0.000       0.000       0.000
                                                                               [0.000]     [0.000]     [0.000]     [0.000]

Sample                                      All         All       Propensity    All       Propensity   All       Propensity
                                                                    Score                   Score                   Score
Author affiliation fixed effects (N=158)    no          yes         yes         yes         yes         yes         yes
R-squared                                  0.062       0.109       0.216       0.109       0.216       0.079       0.207
N                                          2,783       2,783        824        2,783        824        2,756        816

Notes: Robust standard errors clustered at the author affiliation-APSA-MPSA level are in brackets. Downloads in columns 1-5 were recorded in January 2014 for APSA papers and in August 2013 for MPSA papers. Downloads in columns 6 and 7 were recorded in January 2015 for APSA papers and in August 2014 for MPSA papers.
** Significant at the 5% level, * Significant at the 10% level
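For readers who want to see the shape of the estimation, the sketch below illustrates a Table 3-style specification in Python using statsmodels: downloads are regressed on the APSA x 2012 interaction plus the controls listed in the table, with standard errors clustered at the author affiliation-conference level. This is a minimal illustration under our own naming assumptions, not the authors' code; the input file and every variable name (conference_articles.csv, downloads, apsa, y2012, affiliation_conference, and so on) are hypothetical.

# Minimal sketch of a Table 3-style difference-in-differences regression.
# Illustrative only: the file name and all column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("conference_articles.csv")  # hypothetical article-level dataset

# The coefficient of interest is the APSA x 2012 interaction: the change in
# downloads for APSA articles in the cancelled 2012 edition relative to the
# change for MPSA articles over the same period.
formula = (
    "downloads ~ apsa:y2012 + apsa + y2012 + y2011 + y2010"
    " + num_authors + prev_ssrn_articles + years_since_first_ssrn"
    " + days_in_ssrn + I(days_in_ssrn ** 2) + C(affiliation)"
)

# Cluster-robust standard errors at the affiliation-by-conference level,
# mirroring the clustering described in the notes to Table 3.
result = smf.ols(formula, data=df).fit(
    cov_type="cluster",
    cov_kwds={"groups": df["affiliation_conference"]},
)
print(result.params["apsa:y2012"], result.bse["apsa:y2012"])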
Table 4: Effects of Conferences on Number of Articles' Downloads

                                               [1]        [2]        [3]        [4]        [5]        [6]
APSA 2012 x Expected Audience               -0.116**   -0.113**   -0.119**   -0.116**   -0.079**   -0.190**
                                            [0.048]    [0.051]    [0.055]    [0.056]    [0.040]    [0.044]
Expected Audience                            0.114**    0.055      0.068      0.072*     0.068      0.133
                                            [0.036]    [0.036]    [0.0423]   [0.043]    [0.046]    [0.141]
Controls
Days in SSRN and (days in SSRN)^2            yes        yes        yes        yes        yes        yes
Conference year dummies                      yes        yes        yes        yes        yes        no
Session theme fixed-effects (N=131)          no         yes        yes        yes        yes        yes
Author affiliation fixed effects (N=158)     no         no         yes        yes        yes        yes
Session time-day slot fixed-effects (N=12)   no         no         no         yes        no         yes
Session theme - year trend                   no         no         no         no         yes        no
Year trend                                   no         no         no         no         no         yes
Expected audience x days in SSRN             no         no         no         no         no         yes
Expected audience x (days in SSRN)^2         no         no         no         no         no         yes
R-squared                                    0.056      0.128      0.177      0.182      0.210      0.177
N                                            2,688      2,688      2,684      2,684      2,684      2,684

Notes: The sample includes only APSA articles. The dependent variable is the number of downloads by January 2014. The variable expected audience is explained in the text. Robust standard errors clustered at the theme level are in brackets.
** Significant at the 5% level, * Significant at the 10% level
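Schematically, and under the assumption that the estimating equation follows the rows and controls listed in Table 4, the within-APSA specification can be written as (our notation, for illustration only; the construction of the expected-audience variable is described in the text):

\[
\text{Downloads}_i \;=\; \beta\,\big(2012_{i} \times \text{ExpAud}_{s(i)}\big) \;+\; \delta\,\text{ExpAud}_{s(i)} \;+\; f(\text{Days}_i) \;+\; \lambda_{t(i)} \;+\; \mu_{\theta(i)} \;+\; \varepsilon_i ,
\]

where $s(i)$ denotes the session of article $i$, $f(\cdot)$ collects the days-in-SSRN terms, $\lambda_{t(i)}$ are conference-year dummies, and $\mu_{\theta(i)}$ are session-theme fixed effects; the remaining columns add further fixed effects, trends and interactions as indicated in the controls panel.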
Table 5: Effects of Session Attendance on Articles' Downloads

                                             [1]        [2]        [3]        [4]
APSA 2012 x CompeteFamousAuthor           13.629     14.045*
                                          [8.599]    [8.125]
CompeteFamousAuthor                       -3.349     -4.742*
                                          [4.167]    [2.779]
APSA 2012 x First Session                                        13.388     13.852
                                                                 [8.396]    [8.972]
First Session (Thursday 8AM)                                     -1.754      0.638
                                                                 [4.779]    [4.777]
Controls                                   yes        yes        yes        yes
R-squared                                  0.132      0.181      0.132      0.181
N                                          2,688      2,684      2,688      2,684

Notes: Same notes as Table 4. All regressions include covariates for conference year fixed effects, number of days in SSRN, the square of the number of days in SSRN, theme fixed effects and session day-time fixed effects. Regressions in columns 2 and 4 also include author affiliation fixed effects.
Table 6: Effects of Conferences on Articles' Downloads by Authorship

                                                      Author has a previous
                                                         paper in SSRN                        Author affiliation
Sample:                                      All        No         Yes        All       Top 10   ]Top 10,    below
                                                                                                  Top 100]    Top 100
                                             [1]        [2]        [3]        [4]        [5]        [6]        [7]
Panel A: outcome downloads by Jan 2014
2012 Expected Audience x Have a previous
  paper in SSRN                            -0.172*
                                           [0.090]
2012 Expected Audience x Does not have a
  previous paper in SSRN                   -0.100
                                           [0.069]
2012 Expected Audience x Top 10                                              -0.044
                                                                             [0.111]
2012 Expected Audience x ]Top 10 to
  Top 100]                                                                   -0.100
                                                                             [0.062]
2012 Expected Audience x below Top 100                                       -0.117**
                                                                             [0.051]
2012 Expected Audience                                 -0.080     -0.191**               0.270     -0.151*    -0.189**
                                                       [0.051]    [0.063]               [0.217]    [0.091]    [0.065]
R-squared                                   0.181      0.198      0.272      0.139      0.307      0.157      0.182
N                                           2,684      1,485      1,199      2,688       256       1,164      1,264

Panel B: outcome downloads by Jan 2015
2012 Expected Audience x Have a previous
  paper in SSRN                            -0.228**
                                           [0.117]
2012 Expected Audience x Does not have a
  previous paper in SSRN                   -0.100
                                           [0.098]
2012 Expected Audience x Top 10                                               0.063
                                                                             [0.150]
2012 Expected Audience x ]Top 10 to
  Top 100]                                                                   -0.141*
                                                                             [0.075]
2012 Expected Audience x below Top 100                                       -0.146**
                                                                             [0.073]
2012 Expected Audience                                 -0.086     -0.242**               0.413     -0.217*    -0.229**
                                                       [0.078]    [0.081]               [0.373]    [0.116]    [0.076]
R-squared                                   0.152      0.166      0.252      0.113      0.279      0.134      0.163
N                                           2,659      1,475      1,184      2,663       252       1,153      1,254

Notes: The sample includes only APSA articles. The variable expected audience is explained in the text. In Panel A, the dependent variable is the number of downloads by January 2014; in Panel B, the number of downloads by January 2015. All regressions include covariates for conference year fixed effects, expected audience, number of days in SSRN, the square of the number of days in SSRN and theme fixed effects. Regressions in columns 1-3 also include indicators for affiliation fixed effects. Regressions in columns 1 and 4-7 also include an indicator for whether some of the author(s) have a previous paper in SSRN. Robust standard errors clustered at the theme level are in brackets.
** Significant at the 5% level, * Significant at the 10% level
Table 7 - Impacts by Authorship: Effects of Conferences on the Downloads of Authors' Other SSRN Articles

                                                      Author has a previous
                                                         paper in SSRN                        Author affiliation
Sample:                                      All        No         Yes        All       Top 10   ]Top 10,    below
                                                                                                  Top 100]    Top 100
                                             [1]        [2]        [3]        [4]        [5]        [6]        [7]
Panel A: outcome downloads by Jan 2014
2012 Expected Audience x Have a previous
  paper in SSRN                            -0.104
                                           [0.109]
2012 Expected Audience x Does not have a
  previous paper in SSRN                   -0.069
                                           [0.153]
2012 Expected Audience x Top 10                                               0.198
                                                                             [0.165]
2012 Expected Audience x ]Top 10 to
  Top 100]                                                                   -0.073
                                                                             [0.072]
2012 Expected Audience x below Top 100                                       -0.006
                                                                             [0.049]
2012 Expected Audience                                  0.011     -0.066                 0.539     -0.105     -0.079*
                                                       [0.091]    [0.083]               [0.513]    [0.100]    [0.046]
R-squared                                   0.162      0.214      0.302      0.117      0.210      0.187      0.098
N                                           2,685      1,485      1,200      2,689       256       1,165      1,264

Panel B: outcome downloads by Jan 2015
2012 Expected Audience x Have a previous
  paper in SSRN                            -0.050
                                           [0.118]
2012 Expected Audience x Does not have a
  previous paper in SSRN                   -0.085
                                           [0.200]
2012 Expected Audience x Top 10                                               0.276
                                                                             [0.224]
2012 Expected Audience x ]Top 10 to
  Top 100]                                                                   -0.068
                                                                             [0.098]
2012 Expected Audience x below Top 100                                       -0.006
                                                                             [0.070]
2012 Expected Audience                                  0.048     -0.112                 0.860     -0.156     -0.091*
                                                       [0.116]    [0.115]               [0.666]    [0.152]    [0.052]
R-squared                                   0.149      0.219      0.268      0.101      0.197      0.164      0.102
N                                           2,685      1,485      1,200      2,689       256       1,165      1,264

Notes: Same notes as Table 6. In Panel A, the dependent variable is the number of downloads, by January 2014, of other articles posted in SSRN by the author of article i within nine months of the conference. In Panel B, the dependent variable is the number of downloads by January 2015.
APPENDIX
Table A1 - Descriptives: Articles' Characteristics and Outcomes

Sample:                                            All                      In SSRN
                                            APSA        MPSA          APSA        MPSA
                                             [1]         [2]           [3]         [4]
Characteristics (means, with standard deviations in brackets)
Author is from a Top 10 Institution (%)     12.94        8.48          9.59       15.88
                                           [33.57]     [27.87]       [29.46]     [36.72]
Author is from a ]Top 10, Top 100]
  Institution (%)                           46.17       45.22         43.35       42.05
                                           [49.85]     [49.77]       [49.56]     [49.59]
Author is from an Institution below
  Top 100 (%)                               40.6        46.28         46.91       42.05
                                           [49.11]     [49.86]       [49.91]     [49.59]
Number of Authors                            1.362       1.431         1.368       1.485
                                            [0.645]     [0.724]       [0.658]     [0.811]
Number of Days in SSRN                                                 1122        1029
                                                                      [414.5]    [594.34]
Number of previous articles posted
  in SSRN                                                              1.709       1.752
                                                                      [5.57]      [6.48]
N                                           12,055       3,569         2,695         107
Table A2 - Diff-in-Diff Coefficient on Pre-determined Variables

Sample:                                    APSA + MPSA articles          APSA articles
Diff-in-diff coefficient:                      APSA x 2012          APSA x 2012 x expected audience
                                            [1]         [2]            [3]           [4]
Dependent variable
Author affiliated to Top 10              -0.1307                     0.0001
                                         [0.0932]                   [0.000387]
Author affiliated between Top 11-50       0.1099                    -0.0005
                                         [0.0795]                   [0.00051]
Author affiliated between Top 51-100     -0.0393                    -0.0010**
                                         [0.0941]                   [0.0004]
Author affiliated below Top 100           0.0601                     0.0014**
                                         [0.1122]                   [0.0006]
Number of Authors                         0.1238      0.1049        -0.0020**     -0.0017**
                                         [0.1662]    [0.17022]      [0.0007]      [0.0007]
Number of previous articles posted
  in SSRN                                 3.7932**    4.3436**      -0.0028       -0.0041
                                         [0.9433]    [1.1788]       [0.0044]      [0.0048]
Number of days the conference article
  has been in SSRN                      -29.6108    -39.4053        -0.3640       -0.4790
                                        [88.234]    [87.9130]       [0.3266]      [0.3449]
Controls
APSA dummy and year fixed effects          yes         yes            yes           yes
Author affiliation fixed effects           no          yes            no            yes
N                                         2,781       2,781          2,686         2,686

** Significant at the 5% level, * Significant at the 10% level
Table A3: Effects of Conferences on the Number of Google Scholar Citations by 2014

                                              Author has a previous
                                                 paper in SSRN                 Author affiliation
                                    All        No         Yes       Top 10    ]Top 10,     below
                                                                               Top 100]    Top 100
                                    [1]        [2]        [3]        [4]        [5]         [6]
Panel A - Sample: Google Scholar
Average [standard deviation]       4.739                            7.512      5.065       3.520
                                 [11.810]                         [15.538]   [12.211]     [9.662]
2012 x Expected Audience          -0.005                           -0.004     -0.001      -0.008*
                                  [0.003]                          [0.008]    [0.005]     [0.004]
Expected Audience                  0.005**                          0.014      0.002       0.003
                                  [0.002]                          [0.009]    [0.003]     [0.003]
R-squared                          0.095                            0.081      0.080       0.100
N                                  7,935                             974       3,633       3,328

Panel B - Sample: SSRN and Google Scholar
Average [standard deviation]       5.312      4.862      5.884      8.290      5.808       4.178
                                 [11.655]   [11.820]   [11.425]   [15.200]   [11.762]    [10.462]
2012 x Expected Audience           0.002     -0.013      0.020**    0.081**   -0.010      -0.010
                                  [0.006]    [0.008]    [0.008]    [0.033]    [0.007]     [0.009]
Expected Audience                  0.002      0.009     -0.008*     0.022     -0.010**     0.010*
                                  [0.003]    [0.007]    [0.005]    [0.022]    [0.004]     [0.005]
R-squared                          0.210      0.237      0.314      0.366      0.174       0.177
N                                  2,034      1,138       896        196        899         939

Controls
APSA dummy and conference year
  fixed effects                     yes        yes        yes        yes        yes         yes
Session theme fixed-effects
  (N=131)                           yes        yes        yes        yes        yes         yes
Author affiliation fixed effects
  (N=158)                           yes        yes        yes        no         no          no

Notes: The samples include only APSA articles. The dependent variable is the number of Google Scholar citations recorded by September 2014. The variable expected audience is explained in the text. The sample in Panel A includes all articles which we find appearing on Google Scholar. The sample in Panel B includes the APSA articles used in the main analysis which we find appearing on Google Scholar. Other controls include conference year fixed effects, number of days in SSRN, the square of the number of days in SSRN, affiliation fixed effects and theme fixed effects. Robust standard errors clustered at the theme level are in brackets.
** Significant at the 5% level, * Significant at the 10% level
Table A4: Effects of Conferences on the Chance that an Article Has a Google Scholar Citation by 2014

                                              Author has a previous
                                                 paper in SSRN                 Author affiliation
                                    All        No         Yes       Top 10    ]Top 10,     below
                                                                               Top 100]    Top 100
                                    [1]        [2]        [3]        [4]        [5]         [6]
Panel A - Sample: Google Scholar
Average [standard deviation]       0.431                            0.540      0.451       0.376
                                  [0.495]                          [0.499]    [0.498]     [0.484]
2012 x Expected Audience          -0.001                            0.000      0.000      -0.001**
                                  [0.000]                          [0.001]    [0.001]     [0.000]
Expected Audience                  0.000*                           0.001      0.000       0.000
                                  [0.000]                          [0.001]    [0.000]     [0.000]
R-squared                          0.115                            0.142      0.092       0.119
N                                  7,615                             931       3,483       3,201

Panel B - Sample: SSRN and Google Scholar
Average [standard deviation]       0.562      0.523      0.611      0.673      0.601       0.501
                                  [0.496]    [0.500]    [0.488]    [0.470]    [0.490]     [0.500]
2012 x Expected Audience           0.000     -0.001      0.001      0.003      0.000      -0.001
                                  [0.000]    [0.001]    [0.001]    [0.003]    [0.001]     [0.001]
Expected Audience                  0.000      0.000     -0.001      0.000     -0.001       0.000
                                  [0.000]    [0.000]    [0.001]    [0.002]    [0.000]     [0.001]
R-squared                          0.213      0.238      0.310      0.397      0.183       0.174
N                                  1,955      1,096       859        187        854         914

Controls
Session theme fixed-effects
  (N=131)                           yes        yes        yes        yes        yes         yes
Author affiliation fixed effects
  (N=158)                           yes        yes        yes        no         no          no

Notes: Same notes as Table A3. The dependent variable is an indicator of whether the paper has at least one Google Scholar citation by September 2014.
Table A5: Effects of Conferences on the Number of SSRN Views and Citations

                                              Author has a previous
                                                 paper in SSRN                 Author affiliation
                                    All        No         Yes       Top 10    ]Top 10,     below
                                                                               Top 100]    Top 100
                                    [1]        [2]        [3]        [4]        [5]         [6]
Panel A - Dependent variable: number of SSRN citations
2012 x Expected Audience           0.000      0.000      0.000      0.001     -0.001       0.000
                                  [0.000]    [0.000]    [0.001]    [0.003]    [0.001]     [0.000]
Expected Audience                 -0.001      0.001     -0.001     -0.001      0.000       0.000
                                  [0.000]    [0.003]    [0.001]    [0.001]    [0.000]     [0.000]
R-squared                          0.232      0.290      0.287      0.439      0.256       0.164
N                                  2,041      1,105       936        201        894         946

Panel B - Dependent variable: number of SSRN views
2012 x Expected Audience          -0.216     -0.686     -0.297      1.233*    -0.500      -0.702
                                  [0.229]    [0.430]    [0.355]    [0.666]    [0.385]     [0.468]
Expected Audience                  0.151      0.414     -0.038     -0.682      0.188       0.488
                                  [0.273]    [0.390]    [0.239]    [0.466]    [0.168]     [0.443]
R-squared                          0.207      0.231      0.320      0.326      0.179       0.227
N                                  2,685      1,485      1,200       256       1,165       1,264

Controls
Session theme fixed-effects
  (N=131)                           yes        yes        yes        yes        yes         yes
Author affiliation fixed effects
  (N=158)                           yes        yes        yes        no         no          no

Notes: In Panel A, the dependent variable is the number of SSRN citations by January 2014; in Panel B, it is the number of SSRN views by January 2014. All regressions include covariates for conference year fixed effects, expected audience, number of days in SSRN, the square of the number of days in SSRN and theme fixed effects. Regressions in columns 1-3 also include indicators for affiliation fixed effects. Regressions in columns 1 and 4-6 also include an indicator for whether any of the authors have a previous paper in SSRN.