MongoDB and Spring Data

This blog post will give the reader a decent start at writing a Spring-based application that writes to MongoDB, retrieves data via queries, and finally runs a simple MapReduce job – all using Spring Data's MongoDB support.

The business case we will work with is this: look through the 2012 Presidential contribution data set and get a total count of transactions per candidate. While trying to find a decently large data set, I stumbled upon the campaign contribution data set at http://fec.gov/disclosurep/PDownload.do. The data set is not really that large, but I could not resist the temptation of actually playing with it. At the end of this article I have attached the Maven project and also a modified data set file. The file is in Excel CSV format. To import the data into the database I have a loader in this project, but it depends on another library of mine to convert the CSV to POJOs. You will find instructions on that jar at the end of the article.

For starters, please install MongoDB. The instructions can be found at http://www.mongodb.org/display/DOCS/Quickstart. As you can see, it's pretty straightforward. I used my Windows machine – apologies to my Mac. Start MongoDB.

Next, run the mongo.exe command to connect to the database via the shell. Here are some basic commands I used while writing this code.

  • show dbs (lists all the current databases).
  • use <databasename> (database name I used is contributionsdb).
  • show collections (collection name I use is contribution).
  • db.contribution.findOne() (finds one record and displays it).
  • db.contribution.find({candNm:"name here"}) (this will return many rows, so use carefully or add more criteria).
  • db.contribution.getIndexes() (returns all the current index names).
  • db.contribution.dropIndex({candNm:1}) (drop existing index).
  • db.contribution.ensureIndex({candNm:1}) (create index).
  • db.contribution.count() (get count of records in the collection).
  • db.contribution.remove() (removes all documents from the collection; use db.contribution.drop() to remove the collection itself).

Please note that most of the commands above work against a collection named 'contribution'.

Now let's move to the Spring part. This is a Maven project. First, the spring-config.xml in the resources folder.
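A minimal spring-config.xml for this setup might look something like the following sketch (the base-package value is an assumption; the host, port, and database name match the local MongoDB instance and the contributionsdb database used in the shell commands above):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:mongo="http://www.springframework.org/schema/data/mongo"
       xsi:schemaLocation="
         http://www.springframework.org/schema/beans
         http://www.springframework.org/schema/beans/spring-beans.xsd
         http://www.springframework.org/schema/context
         http://www.springframework.org/schema/context/spring-context.xsd
         http://www.springframework.org/schema/data/mongo
         http://www.springframework.org/schema/data/mongo/spring-mongo.xsd">

  <!-- scan for annotated beans such as DataMinerImpl (base package is an assumption) -->
  <context:component-scan base-package="com.aver"/>

  <!-- connection to the local MongoDB instance -->
  <mongo:mongo host="localhost" port="27017"/>

  <!-- MongoTemplate bound to the contributionsdb database -->
  <bean id="mongoTemplate" class="org.springframework.data.mongodb.core.MongoTemplate">
    <constructor-arg ref="mongo"/>
    <constructor-arg value="contributionsdb"/>
  </bean>
</beans>
```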

Please review the code in DataMinerImpl.java for the actual code that loads the data and queries the database. I am not going to repeat all of it here, except for some key points noted below.

  • Using @Component to register the bean.
  • Using @Autowired to inject the Spring Data MongoTemplate instance into the class.
  • All data operations are performed on the MongoTemplate.
  • Save: mongoTemplate.save(object);
  • Query: retrieve the count of records by candidate name.
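As a quick sketch of what those two operations look like together (this is not the actual DataMinerImpl code; the Contribution class shape is an assumption, while the candNm field and the contribution collection come from the shell queries above):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.mapping.Document;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.stereotype.Component;

// Minimal stand-in for the POJO mapped to the "contribution" collection.
@Document(collection = "contribution")
class Contribution {
    @Id
    private String id;
    private String candNm; // candidate name
    // getters/setters omitted for brevity
}

@Component
public class DataMinerSketch {

    @Autowired
    private MongoTemplate mongoTemplate;

    // Save: insert (or update) one contribution document.
    public void save(Contribution contribution) {
        mongoTemplate.save(contribution);
    }

    // Query: count the records for one candidate by the candNm field.
    public long countByCandidate(String candidateName) {
        Query query = Query.query(Criteria.where("candNm").is(candidateName));
        return mongoTemplate.count(query, Contribution.class);
    }
}
```

Note that mongoTemplate.count(...) performs the count on the server rather than pulling documents back to the client, which matters with 800,000 rows.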

 

To load the data into the database, execute the program LoadDataIntoDB. This class is located in the Maven test folder. It could take a few minutes, since there are 800,000 rows to insert and we are running on a single machine.

Once the data is loaded, execute the JUnit test – MongoDBSpringDataTest.

MapReduce. If you don't know what MapReduce is, please read up on it on the web; I assume here that you are familiar with it.

MongoDB MapReduce scripts are written in JavaScript. You provide a Map function and then a Reduce function. Here is the Map function from the source file map_by_candidate.js.
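A map function for this use case can be sketched as follows (not the original listing; candNm is the candidate-name field seen in the shell queries above, while contbReceiptAmt as the dollar-amount field is an assumption):

```javascript
// Emit one key/value pair per contribution document.
// Key: the candidate name; value: a count of 1 plus the dollar amount,
// so the reduce step can total both per candidate.
var map = function () {
  emit(this.candNm, { count: 1, amount: this.contbReceiptAmt });
};
```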

Here is the Reduce function from the source file reduce_by_candidate.js.
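The reduce function can be sketched like this (again not the original listing; the value shape must match what the map function emits):

```javascript
// Fold all the values emitted for one candidate into a single document
// holding the total transaction count and the total dollar amount.
var reduce = function (key, values) {
  var result = { count: 0, amount: 0 };
  values.forEach(function (value) {
    result.count += value.count;
    result.amount += value.amount;
  });
  return result;
};
```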

The Map function receives all the records on that node, and you decide whether or not you care about each record. For the records you care about, call the emit function to select them. Once the Map function has executed, the emitted data is grouped by key and sent to the Reduce function as a key and its values. We iterate over the values and increment the count and the dollar amounts.
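To make that flow concrete, here is a tiny stand-alone simulation of the emit/group/reduce steps in plain JavaScript, independent of MongoDB (field names assumed as above; emit is passed in explicitly here, whereas MongoDB provides it as a global):

```javascript
// Simulate MongoDB's MapReduce pipeline in memory:
// 1. run the map function over every document, collecting (key, value) pairs
// 2. group the emitted values by key
// 3. run the reduce function once per key over its grouped values
function miniMapReduce(docs, mapFn, reduceFn) {
  var emitted = [];
  var emit = function (key, value) { emitted.push({ key: key, value: value }); };
  docs.forEach(function (doc) { mapFn.call(doc, emit); });

  var groups = {};
  emitted.forEach(function (pair) {
    (groups[pair.key] = groups[pair.key] || []).push(pair.value);
  });

  var results = {};
  Object.keys(groups).forEach(function (key) {
    results[key] = reduceFn(key, groups[key]);
  });
  return results;
}

var results = miniMapReduce(
  [
    { candNm: "A", contbReceiptAmt: 10 },
    { candNm: "A", contbReceiptAmt: 5 },
    { candNm: "B", contbReceiptAmt: 7 }
  ],
  function (emit) { emit(this.candNm, { count: 1, amount: this.contbReceiptAmt }); },
  function (key, values) {
    var total = { count: 0, amount: 0 };
    values.forEach(function (v) { total.count += v.count; total.amount += v.amount; });
    return total;
  }
);
// results.A → { count: 2, amount: 15 }, results.B → { count: 1, amount: 7 }
```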

The output is one document per candidate, holding the transaction count and the total dollar amount.

To run the code…

  • Install MongoDB. Please refer to the MongoDB site for instructions.
  • Download my CSV reader flatfilereader from GitHub – https://github.com/thomasma/flatfilereader.
    • Run "mvn -Dmaven.test.skip=true clean package" to create the jar file.
    • Use mvn install:install-file to install the library into your local Maven repository => mvn install:install-file -Dfile=flatfilereader-0.8.jar -DgroupId=com.aver -DartifactId=flatfilereader -Dversion=0.8 -Dpackaging=jar
    • You can now build the main mongo project since it has a dependency on this library.
  • Download the code for this blog from GitHub – https://github.com/thomasma/mongo_campaign_finance. Open the project in Eclipse.
  • Download the latest data file directly from fec.gov (ALL.zip) – http://fec.gov/disclosurep/PDownload.do.
  • In Eclipse, select the class LoadDataIntoDB and change the location to point to the downloaded data zip file. Ensure MongoDB is running, then run LoadDataIntoDB. If all works, this will load the records into the database.
  • Run MongoDBSpringDataTest to test some of the functions.

One final note: my example runs on a single instance of the MongoDB server. The data set is not large enough to require sharding/partitioning. Time permitting, I will give that a shot sometime soon. If you did have sharding turned ON, the example should work exactly as-is.
