This short note should allow anyone to replicate the results and graphs from my paper. One note up front: these replication files make extensive use of .Rdata files to store the many nested lists needed to create the analyses in the paper. All .Rdata files are readable in R versions > 3.2. If any file cannot be read, email me at kamcal@umich.edu and I can attempt to put it in another, more generally readable format.

The CPU time needed to replicate the full analysis is quite long. For this reason, I have included numerous intermediate files for anyone who wants to replicate or extend the analysis using the pre-calculated results. The data flow can be visualized as:

HouseOuts > HouseResults > HouseSummaries

and similarly for the Senate.

Instructions for Replicating Results:

#Use RunAllSessions.R. This script runs the BPIRT model for all sessions of the U.S. House and U.S. Senate.

1) Install the version of the NPFA package used for this paper (npfa-master). This is an R package that can be installed from source and is needed to run the code in the later parts of the analysis (see the install-and-load sketch after step 3 below). The code should work in all versions of R > 3.2. It is optimized to an extent, but it is not up to production standards. Stay tuned for a more optimized version of this code and a CRAN package in the coming weeks.

2) Pull and save all of the data from the NOMINATE service (this is handled within the provided script). To reproduce the reported results, use the files provided for each session in HouseOuts2 and SenateOuts2; I cannot guarantee that roll call records pulled directly from Voteview today will match the records used here.

3) Use the saved data (locked in in 2018) to run BPIRT for all sessions of the U.S. House and Senate. The script makes extensive use of the foreach package in R, which allows sessions to be estimated in parallel (see the second sketch below). The algorithm itself is fairly quick, but later sessions of the U.S. Congress have many more votes than earlier sessions, so they can take quite a while to run to completion with proper adaptation, burn-in, and sample collection. This step produces two main outputs. First, the data used for each session of the U.S. House and U.S. Senate are included in the folders HouseOuts2 and SenateOuts2, stored as .Rdata files. Second, the MCMC output for each session is stored in HouseResults2 and SenateResults2, again as .Rdata files.
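For steps 1) and 2), the sketch below shows one way to install the package from the npfa-master source folder and to load one of the provided .Rdata files. This is a minimal sketch, not the replication code itself: the package name passed to library() and the example file name are assumptions, so check the DESCRIPTION file and the folder contents for the actual names.

    ## Install the NPFA package from the local source folder (step 1).
    ## remotes::install_local() accepts a package directory or a source tarball.
    install.packages("remotes")
    remotes::install_local("npfa-master")
    library(npfa)  # assumed package name; see the DESCRIPTION file in npfa-master

    ## Load one of the provided session files (step 2). load() restores the saved
    ## objects into the current environment and returns their names.
    loaded <- load("HouseOuts2/House_107.Rdata")  # file name is illustrative
    print(loaded)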
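The parallel pattern in step 3) looks roughly like the following, assuming a doParallel backend. run_bpirt_session() and the file-naming scheme are hypothetical stand-ins for the actual calls in RunAllSessions.R; the sketch only illustrates how foreach distributes per-session estimation across workers.

    library(foreach)
    library(doParallel)

    ## Register a parallel backend so %dopar% distributes sessions across workers.
    cl <- makeCluster(parallel::detectCores() - 1)
    registerDoParallel(cl)

    sessions <- 1:116  # loop over House sessions; the Senate is handled the same way

    res <- foreach(s = sessions, .packages = "npfa") %dopar% {
      ## Restore the saved roll call data for session s (file name is illustrative).
      obj_name <- load(sprintf("HouseOuts2/House_%d.Rdata", s))
      dat <- get(obj_name[1])  # assumes the session data is the first/only saved object
      ## run_bpirt_session() is a hypothetical wrapper around the BPIRT sampler,
      ## with adaptation, burn-in, and sample collection handled inside.
      fit <- run_bpirt_session(dat)
      save(fit, file = sprintf("HouseResults2/HouseResult_%d.Rdata", s))
      s
    }

    stopCluster(cl)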
#Use SummarizeCongressResults.R

4) Summarize the output from the 464 MCMC procedures (2 for each chamber in each of the 116 sessions of the U.S. House and Senate). This step takes the results of the converged MCMC estimation procedures and creates meaningful summaries for each session, including the proportion of variation explained by each dimension, the probability of unidimensional and multidimensional votes, comparisons to WNOMINATE, and so on. This step can take a while, since a few of the computations are quite CPU intensive. All summaries built from the posterior samples for the locked-in data are included in the HouseSummaries and SenateSummaries folders.

5) Use the summaries to analyze the whole of U.S. Congressional history and the 107th U.S. House.
#Use GenerateCongressSummaryPlots.R and Generate107SummaryPlots.R

6) Perform the analysis of cloture votes.
#Start by collecting all cloture votes. There are two separate processes for different periods of time: use getClotureVotes_65_100.R for the first set and getClotureVotes_101_116.R for the second set.
#WARNING!!!!! The second set requires using the ProPublica API to pull legislative text summaries. Be careful with this procedure and make sure to use your own API key (a hedged sketch of such a request appears at the end of this note).
#Use ProcessClotureVotes.R to go from the raw votes to the data set needed for the analysis.
#Use BayesianAnalysisCloture.R to run the models and generate the plots.

7) Perform the analysis of final passage votes for the Party Cartel model.
#Start with ProcessCartelVotes.R to go from the raw data to the data set needed for the analysis.
#Use CartelAnalysiswithCuts.R to run the Cartel analysis.

There are many intermediate files needed to go from start to finish. These are included in the replication data. You can run the models yourself, but know that they can take a long time to finish.
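For the ProPublica step in 6), the sketch below shows the general shape of an authenticated request, assuming the ProPublica Congress API and its X-API-Key header. The endpoint, Congress number, environment variable name, and parsing are illustrative placeholders, not the code in getClotureVotes_101_116.R.

    library(httr)
    library(jsonlite)

    ## Keep your key out of shared scripts; read it from an environment variable
    ## (the variable name PROPUBLICA_API_KEY is just a convention used here).
    api_key <- Sys.getenv("PROPUBLICA_API_KEY")

    ## Example request for bills from one Congress (endpoint is illustrative).
    resp <- GET(
      "https://api.propublica.org/congress/v1/115/senate/bills/introduced.json",
      add_headers("X-API-Key" = api_key)
    )
    stop_for_status(resp)

    bills <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
    str(bills, max.level = 2)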