Recently I have been considering how to implement the persistence layer of the MoneyJar application. However, after looking at some examples online, a new perspective hit me that will change how its data is managed.
Instead of importing the CSV file and parsing it into a list of transaction objects (as I have been doing), why not process the CSV file and dump it straight into a MySQL database?
This decouples the import features from the MoneyJar application. Much like how I broke up the original application into several financial analysis tools packaged by feature, this decoupling works the same way: the import features and behaviour can operate as a standalone application for converting CSV transaction histories into MySQL tables, or its APIs can be used as part of a larger framework.
To clarify the changes being made, these are the steps I am currently using:
- Convert CSV to Transaction Objects.
- Use transaction objects to display data in the application.
- Export transaction objects to MySQL table for data persistence.
Here is how it would look after the redesign:
- Convert CSV to MySQL table data.
- Create transaction objects from MySQL data.
- Display data.
What this does is reverse the flow of data. Instead of the MySQL data being populated by a list of Transaction objects, the transaction objects are populated from the database. This works better because we won’t be violating the single responsibility principle (SRP).
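As a minimal sketch of the reversed flow (the schema and column names here are my assumptions, not a final design), a transaction object might be populated straight from a database row like so:

```java
import java.sql.ResultSet;
import java.sql.SQLException;
import java.time.LocalDate;

public class TransactionMapper {
    // Immutable transaction value object, built from a database row rather
    // than from a parsed CSV line.
    public record Transaction(LocalDate date, String description, long cents) {}

    // Maps the current row of a ResultSet to a Transaction. The column
    // names ("date", "description", "amount_cents") are assumed here.
    public static Transaction fromRow(ResultSet rs) throws SQLException {
        return new Transaction(
                rs.getDate("date").toLocalDate(),
                rs.getString("description"),
                rs.getLong("amount_cents"));
    }
}
```

The working list would then just be the result of mapping `fromRow` over a query result, so the database stays the single source of truth.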
By having both a database and a working list of transactions, we were in effect maintaining two models, both of which would need to be managed and kept in sync. Working solely from the database, and using it as the model behind the working list of transaction objects, simplifies how the model is managed.
Treating the database as the model also simplifies some of the features I wanted to implement. One of these is retrieving a list of transactions between a range of dates. Now, instead of figuring out how to do this in Java, it can be pushed into SQL using various SQL commands. This also helps solve some optimization issues, particularly in detecting duplicate transactions.
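For instance, the date-range feature reduces to one parameterised query. A rough sketch (the table and column names are my assumptions, and the method itself needs a live MySQL connection to actually run):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

public class DateRangeQuery {
    // Parameterised SQL: the ? placeholders are bound through JDBC, which
    // keeps date formatting out of the SQL text entirely.
    static final String SQL =
            "SELECT * FROM transactions WHERE date BETWEEN ? AND ?";

    // Returns the descriptions of transactions in [from, to].
    static List<String> descriptionsBetween(Connection conn,
                                            LocalDate from, LocalDate to)
            throws SQLException {
        List<String> out = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(SQL)) {
            ps.setDate(1, java.sql.Date.valueOf(from));
            ps.setDate(2, java.sql.Date.valueOf(to));
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) out.add(rs.getString("description"));
            }
        }
        return out;
    }
}
```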
My current process for detecting duplicates is to compare the incoming list against the existing list of transactions. This has O(n²) performance and has to make a comparison against every transaction we have ever stored, which can become quite slow once we have a long history of transactions. But by using an SQL database as the model, we can narrow the comparisons to a specific date, with retrieval of that date's transactions being a simple SQL command, e.g. SELECT * FROM transactions WHERE date='12/01/12'; This reduces the comparisons from thousands to a select few. Awesome.
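Once the stored rows for the relevant date have been fetched with that query, the remaining comparison is tiny. A sketch of the narrowing step in Java (the `Transaction` record and its fields are illustrative assumptions):

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DuplicateFilter {
    record Transaction(LocalDate date, String description, long cents) {}

    // Keeps only the incoming transactions that are not already stored.
    // The caller supplies just the stored rows for the dates appearing in
    // the import (fetched via SQL), so we never scan the full history.
    static List<Transaction> newOnly(List<Transaction> incoming,
                                     List<Transaction> sameDateStored) {
        Set<Transaction> seen = new HashSet<>(sameDateStored);
        List<Transaction> fresh = new ArrayList<>();
        for (Transaction t : incoming) {
            if (!seen.contains(t)) fresh.add(t);
        }
        return fresh;
    }
}
```

Because records give value-based `equals`/`hashCode` for free, the set lookup makes the in-memory part closer to linear in the size of the import.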
Of course, this means there will be a lot of restructuring of the MoneyJar application. Or will there?
The import feature was already packaged independently from the rest of the application, and there are plenty of tests. The code is clean, as it was the first package where I applied the teachings of Clean Code. CSV parsing is already functional. So clearly I can re-use a lot of the code I already have.
The only additional feature I essentially need is to convert each field of a transaction line into a valid MySQL data type and add the line to the database, which I could probably do quite easily using straightforward JDBC.
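That conversion step might look roughly like this. The CSV layout (date, description, amount), the `dd/MM/yyyy` date format, and the table columns are all assumptions for illustration; the actual mapping is what the import wizard below would let the user configure:

```java
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class CsvToMySql {
    // Parameterised insert; column names are guesses at the schema.
    static final String INSERT =
            "INSERT INTO transactions (date, description, amount) VALUES (?, ?, ?)";

    // Parses one CSV line like "12/01/2012,Coffee,-3.50" into typed values
    // suitable for JDBC binding: LocalDate, String, BigDecimal.
    static Object[] parse(String line) {
        String[] tokens = line.split(",");
        LocalDate date = LocalDate.parse(
                tokens[0], DateTimeFormatter.ofPattern("dd/MM/yyyy"));
        return new Object[] { date, tokens[1], new BigDecimal(tokens[2]) };
    }

    // Binds the parsed values and executes the insert; needs a live
    // MySQL connection, so it is shown here only as a sketch.
    static void insert(Connection conn, String line) throws SQLException {
        Object[] v = parse(line);
        try (PreparedStatement ps = conn.prepareStatement(INSERT)) {
            ps.setDate(1, java.sql.Date.valueOf((LocalDate) v[0]));
            ps.setString(2, (String) v[1]);
            ps.setBigDecimal(3, (BigDecimal) v[2]);
            ps.executeUpdate();
        }
    }
}
```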
Then I’d need a way to read, write, and delete transactions as the user manipulates the working list, which will be the next step.
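One way to pin down those operations before writing any JDBC is a small data-access interface. Everything here is a hypothetical sketch; the in-memory implementation is just a stand-in that lets the rest of the application be tested before the MySQL-backed version exists:

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

// Hypothetical data-access interface for the working list. The real
// implementation would delegate to JDBC; this pins down the operations.
public interface TransactionStore {
    record Transaction(LocalDate date, String description, long cents) {}

    void add(Transaction t);
    void remove(Transaction t);
    List<Transaction> between(LocalDate from, LocalDate to);

    // In-memory stand-in, handy for tests before the MySQL version exists.
    class InMemory implements TransactionStore {
        private final List<Transaction> rows = new ArrayList<>();

        public void add(Transaction t) { rows.add(t); }

        public void remove(Transaction t) { rows.remove(t); }

        public List<Transaction> between(LocalDate from, LocalDate to) {
            List<Transaction> out = new ArrayList<>();
            for (Transaction t : rows)
                if (!t.date().isBefore(from) && !t.date().isAfter(to))
                    out.add(t);
            return out;
        }
    }
}
```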
The Potential Hiccups
One of the things I will need to investigate is how I am going to incorporate an import wizard. This wizard will be a dialog that allows the user to specify how each field/token/column in the CSV file maps to the database.
It will also allow users to specify the database preferences/settings and, of course, choose a file to import…