After doing some digging on the interwebs, I found several posts suggesting that the CSS margin and padding properties could be creating this issue. I tried those solutions and none of them worked for me. Then I noticed that the .well class from Twitter Bootstrap includes the min-height property. So I overrode that property, set it to 0px, and my problem was solved. Here is the jsFiddle that includes the solution: http://jsfiddle.net/MUVVh/1/.
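The override itself is tiny; something along these lines (selector scoping is up to your page):

```css
/* Override Bootstrap's .well min-height so the element collapses to its content */
.well {
  min-height: 0px;
}
```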
Metaprogramming is both fun and challenging at the same time. Metaprogramming with Ruby is easier than in some other languages due to its dynamic nature. However, testing metaprogramming code can be a real challenge, especially covering various edge-case scenarios. In these situations, I've come to appreciate RSpec shared examples for making testing metaprogramming code easier.
For the purposes of this post, we'll start with a simple Ruby class and modify it, along with the specs, to include metaprogramming and shared examples.
Let’s start with a simple class that defines one method that returns a string. This is a trivial example, but in the next section we’ll add some metaprogramming to allow the method to be created on initialization of the class.
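Something along these lines, with illustrative class and method names:

```ruby
# A simple class with one method that returns a string
class MyClass
  def my_method
    "my_method"
  end
end
```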
Here is the corresponding example that validates the functionality.
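A matching spec for the class described above might look like this (names assumed to mirror the class):

```ruby
describe MyClass do
  describe "#my_method" do
    it "returns the method name as a string" do
      expect(MyClass.new.my_method).to eq("my_method")
    end
  end
end
```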
Building upon the previous version of my_class, let's enhance the code and the specs to define the method when the class is initialized. The method still returns a string that matches the method name.
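One way to write that, assuming a default method name of :my_dynamic_method when none is passed in:

```ruby
class MyClass
  def initialize(method_name = :my_dynamic_method)
    # Define the method dynamically on the class; it returns
    # its own name as a string
    self.class.send(:define_method, method_name) do
      method_name.to_s
    end
  end
end
```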
The specs for this class get a little more challenging, because we need to handle both the case when no method_name is passed in and the case when one is.
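A version of those specs, with the default name and the custom name (:custom_method here) chosen for illustration:

```ruby
describe MyClass do
  context "when no method name is passed in" do
    subject { MyClass.new }

    it "defines the default method" do
      expect(subject).to respond_to(:my_dynamic_method)
    end

    it "returns the method name as a string" do
      expect(subject.my_dynamic_method).to eq("my_dynamic_method")
    end
  end

  context "when a method name is passed in" do
    subject { MyClass.new(:custom_method) }

    it "defines the method" do
      expect(subject).to respond_to(:custom_method)
    end

    it "returns the method name as a string" do
      expect(subject.custom_method).to eq("custom_method")
    end
  end
end
```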
These specs are pretty straightforward. The first and third examples ensure that the correct method is created. The second and fourth examples ensure that the method returns the correct string value.
What I don’t like though is the duplication of code. For each scenario I want to test I need to add two more examples. This will only further compound as the functionality of the code grows.
In order to DRY up the specs and to allow for easily adding other scenarios to test, I’m going to implement RSpec Shared Examples.
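A sketch of the shared-examples version; the scenario names other than :your_dynamic_method are my assumptions:

```ruby
shared_examples "a dynamic my class" do |method_name|
  it "defines the method" do
    expect(subject).to respond_to(method_name)
  end

  it "returns the method name as a string" do
    expect(subject.send(method_name)).to eq(method_name.to_s)
  end
end

describe MyClass do
  context "with the default method name" do
    subject { MyClass.new }
    it_behaves_like "a dynamic my class", :my_dynamic_method
  end

  context "with a custom method name" do
    subject { MyClass.new(:custom_method) }
    it_behaves_like "a dynamic my class", :custom_method
  end

  context "with another custom method name" do
    subject { MyClass.new(:your_dynamic_method) }
    it_behaves_like "a dynamic my class", :your_dynamic_method
  end
end
```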
You can see in this example that the expectations are no longer repeated in the specs. All I have to do is call it_behaves_like "a dynamic my class" for each scenario that I want to test. I also added a third scenario that tests the method name :your_dynamic_method with one line of code, demonstrating how easily we can add another scenario.
By passing the method_name into the shared example, we can use the method_name parameter to ensure that the examples can test a variety of scenarios.
Normally I wouldn't include the shared examples in the same file as the specs. I would move them to a support/my_class_shared_examples.rb file and require that file in the spec_helper.rb file, or in the individual spec files themselves, to keep the spec code tidy.
By using shared examples as described above, not only is your code DRY, but there are a couple of added benefits. First, adding additional scenarios to test requires adding one line of code to your existing specs. Second, each time that you add an example to the shared example, you can be confident that it works in all scenarios.
When I wanted to change the configuration in my CanBe gem to allow anyone using the gem to pass in the details association name, I turned to shared examples to ensure that the required metaprogramming worked properly. They greatly reduced the time to implement this functionality and ensured that it worked correctly without breaking existing functionality.
All of the code for this example can be found in my rspec_shared_example_post repo on GitHub.
A customer is defined as “a person who purchases goods or services from another; buyer.” However, in the age of free services such as Google and Facebook, I would like to redefine a customer as “a person who makes use of goods or services from another” for the purposes of this post. This new definition of a customer would also include the company owner, product owners and other stakeholders of the project that you are developing.
In today’s environment we can develop software as little bits of functionality and deliver them almost immediately to our customers, both internal and external. No longer should we be developing in a vacuum, delaying delivery on a piece of functionality until it’s “complete.” Continuous Integration and Continuous Deployment offer the customers, of a software product, early access to the functionality that is being developed.
With this early access, the customers of a product can help to shape the features and functionality very early on and ensure that the correct product is delivered to the paying customers when it is finally released to the world. From a developer's perspective, this might be frustrating because of the constant rework and changing requirements. However, there is an upside for the developers of the product as well. Once the product is finally delivered to the paying customers, there is a greater chance of it being used and increasing the revenue generated by the product. It is infinitely more rewarding to create software that is used than software that no one wants to use. Also, once the product is released to the paying customers and they are using it, there will not be a big push to quickly change the code to meet their needs.
If you are developing a software product, you should be releasing it to your customers as soon as there is anything visible that they can use. Enjoy the feedback that you get and take solace in the fact that you will be developing a quality product that has a greater chance of succeeding.
Still, I didn’t want to have to worry about creating the database and ensuring it was available every time the specs were run. Then I found out that you can create an in-memory SQLite database. This was a perfect fit!
There are only a few things you need to get this up and running. I am using RSpec for my testing, so all of the configuration below is in that context. First, you will need to add this line to your spec/spec_helper.rb file.
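Presumably the standard ActiveRecord in-memory SQLite connection:

```ruby
ActiveRecord::Base.establish_connection(adapter: "sqlite3", database: ":memory:")
```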
Now you have a database that you can run your migrations against and use with your regular Active Record models. Here is an example migration that you could create in spec/support/schema.rb.
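For example, a schema for the Address model used below; the column names are illustrative:

```ruby
ActiveRecord::Schema.define do
  self.verbose = false

  create_table :addresses, force: true do |t|
    t.string :street
    t.string :city
    t.string :state
    t.string :zip_code

    t.timestamps
  end
end
```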
To actually run the migrations you need to load the file. Add this line to your spec/spec_helper.rb file.
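Assuming the schema file lives at spec/support/schema.rb as above:

```ruby
load File.dirname(__FILE__) + "/support/schema.rb"
```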
Next, create your models. I did this in a spec/support/models.rb file.
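With an addresses table like the one above, the model is just:

```ruby
class Address < ActiveRecord::Base
end
```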
Once you require the models.rb file in your spec/spec_helper.rb file, you will have access to the Address model in your specs, and it will behave as a fully functioning Active Record model.
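For example:

```ruby
require File.dirname(__FILE__) + "/support/models.rb"
```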
Using an in-memory SQLite database helped me ensure that the CanBe gem works against fully functioning Active Record models.
However, at the time of learning, I wasn’t shipping anything with the knowledge, personally or professionally. To some degree, what I learned went stale and I had to re-learn some of the basics when I started using it for my job.
Lately, I've begun to change that. I'm still learning, but I'm starting to ship more personal stuff with what I'm learning. How did I do it? I put more focus on shipping instead of learning. Since I'm not an expert in everything, I always learn something new when creating anything personal. For example, I wanted to change the domain name for my blog and switch the blogging engine that I was using. You are reading this post on the results of my efforts. This blog is using Octopress. I had to learn how Octopress works and how to incorporate some of the changes that I wanted. Nothing monumental, but at least I shipped something.
Another example of shipping for me is when I created my first Ruby Gem, CanBe. I learned more about the Ruby language and how to integrate a gem with Ruby on Rails, specifically Active Record. Yes, I learned something, but more importantly, for me, I shipped something.
I’ve been able to discern that learning and shipping aren’t mutually exclusive, rather they can be complementary. For me, it’s just where I put my focus. So my new plan is to focus more on shipping and push myself with each project to learn something new. This will enable me to focus on completing a project by being able to quickly use technology that I am familiar with and still choose one aspect of the project to learn something new. As I wrote in my post, Keeping an Eye on Productivity, it is important for me to still have the focus on shipping instead of learning.
For my next project, I want to focus on having a more reactive user experience on the client side of a web app. I have narrowed it down to two technologies that I want to use, Ember.js or Meteor. Given my new process of shipping over learning, the decision is pretty easy, I will be using Ember.js. I will use Rails for the backend. This will allow me to focus on delivering the functionality that I want because I am already familiar with many of the languages and tools, while still learning Ember.js. If I chose Meteor, I would have to learn how to code the front-end and the back-end.
My suggestion to all developers who work on side projects is to determine what is important to you, shipping or learning, and focus on that. Just because you pick one over the other doesn't mean that the other can't happen. Rather, you just won't get as much accomplished on that front as fast as you might like. In my opinion, you will have more personal satisfaction by picking one.
You could also use ActiveRecord single table inheritance (STI) to implement the functionality that met my requirements. That's when my data modeling and DBA experience kicked in. Using STI would result in database rows with many null values in them. This didn't feel right to me.
That's why I wrote the CanBe gem. With the CanBe gem, you can declaratively define the possible types for any given instance of your model. Initially, it just provided helper methods to ensure consistency. Then I enhanced it to allow for different attributes depending on the type of model instance that you are accessing. To accomplish this, I used polymorphic associations to connect the different models that represent the specific attributes required for the instance type.
As of version 0.2.1 of the CanBe gem, I have implemented most of the functionality that I wanted to include. There are still a few more tweaks that I would like to make. I am hoping that other developers will find the functionality useful. If you want to see functionality that is not currently included, feel free to contact me or submit a pull request. Contributions are always welcome and appreciated.
Writing my first gem has been a great experience! It furthered my knowledge of the Ruby language and how to integrate with Ruby on Rails, specifically ActiveRecord. I was able to apply TDD techniques to ensure the gem didn’t break as I added new functionality. I worked with Travis to ensure that my code was integrated properly when changes were made.
I hope that others find the CanBe gem useful for their projects!
It is, but it's something I've wanted to do for a long time. However, I have one really big problem: I'm not confident in my writing ability. I've always avoided writing, and it's always been challenging for me, going all the way back to grade school. That's all about to change!
I'm going to start writing on this site more. I'm a lifelong learner and I tend to focus a lot on learning new technologies, specifically software development techniques and programming languages. For the most part, I will write about technology-related topics, sharing helpful tips as well as my many opinions on software development and how things should be done.
Why do I think that writing more will help me become more confident in my writing? I have written software in many languages: C#, Ruby, HTML, JavaScript and SQL, just to name a few. I have only gotten better at them by writing more software. Some of that software I have re-written to learn a new language or to apply new techniques that I have learned. It's my opinion that the same can happen with the English language. Only time will tell if my assessment of how to become more confident in my writing abilities is accurate.
Becoming a better writer will help me both personally and professionally. Personally, becoming a better writer will enable me to write much longer works, like a book, and allow me to meet some long-term personal goals. I have no clue what I want the book to be about. I’m sure that will eventually work itself out.
On a professional note, I spend a portion of my time documenting the systems for which I am responsible, and being a better writer will help me produce better documentation. By better documentation, I mean it will communicate the information in a more effective and efficient manner. Also, for work, I am constantly on HipChat and Skype, and it is important to be concise and to the point so as not to waste the reader's time.
I hope you enjoy what I write and I look forward to your feedback!
Fortunately, the URL scheme that Backbone.js uses for syncing data matches perfectly with Rails' resource routing. However, this relies on the JSON that is sent to a Backbone.js model having an id property. But the JSON that is emitted from Rails/Mongoid/mongoDB does not have that property. It has an _id property to represent the id of the document.
I have found two ways to solve this problem. First, as this blog post suggests, you can modify the JSON that is emitted. I don't care for this approach, because it modifies every request for JSON, unless you override what you have already overridden. I prefer the approach of letting Backbone know how to interpret your data.
This can be done by setting the idAttribute property within your model to "_id", as in the example below.
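For example (the model name here is illustrative):

```javascript
var MyModel = Backbone.Model.extend({
  // Tell Backbone that Mongo's _id field is this model's id
  idAttribute: "_id"
});
```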
Once you have set this in your Backbone.js model, all of the syncing functionality works as expected. Also, when you need to access the id property of your Backbone.js model, it works with the id property that is provided by Backbone. This setting is not documented on the main Backbone.js page, but you can find it in the annotated source code here.
However, I wasn't being very productive or moving the project forward. Instead I spent most of my time reading blog posts and writing prototype code. Also, all of the research and prototyping wasn't helping me become more proficient with Rails and Backbone.js.
So, I decided to change tack and focus on the project at hand. I only went out perusing the interwebs to find what I needed when I needed it. It's been about a week now that I've been doing this, and I have written many times more code than I did in the previous weeks. The project is now moving much faster and I am learning much more.
I must say that this process isn't producing the prettiest or most efficient code, but I am fairly inexperienced with Rails and Backbone compared to the other languages that I know. And let's be honest, very few people care about the quality of the code: only other developers who have to maintain it, and your future self. What the majority of people care about is, "does the product do what it's supposed to do?"
I plan to keep up working in this manner to complete the project and to be more productive. I am becoming more proficient with Rails and Backbone.js. I also feel that I am in a better position to evaluate if and when to use a gem to accomplish a given task.
How do you balance “keeping up” vs. actual productivity?
__getattr__ that gets executed when a field is not found. But I did run into one gotcha!
In my __getattr__ method, I was determining the value of the property being accessed from an internal dictionary that stored the values of the model fields. However, something really strange was happening: the __getattr__ method was being called for properties that were defined directly on the class. An example of this can be seen here.
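A reconstruction of that example; the field names and values are illustrative:

```python
class Example(object):
    def __init__(self):
        # Internal dictionary backing the dynamic model fields
        self._fields = {"bar": "bar value"}

    @property
    def foo(self):
        # A well-behaved property: returns normally
        return "foo value"

    @property
    def boom(self):
        # A buggy property: it raises AttributeError internally,
        # which makes Python fall back to __getattr__
        raise AttributeError("bug inside the property")

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails
        return self._fields.get(name, "from __getattr__")
```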
It turns out that when I tried to access the boom property on the Example class, it was being retrieved from the __getattr__ method implementation. This threw me for a loop because I am still fairly new to Python.
From the example above you can see that the foo property works correctly, but the boom property runs the __getattr__ method when accessed. If an AttributeError is raised inside a @property method, Python falls back to the __getattr__ method (other exceptions propagate as usual). Once I addressed the underlying exception, everything worked properly.
I chose Eco templates for two reasons. First, they utilize CoffeeScript for their processing. Second, they can be compiled to JavaScript on the server. To me, this is a benefit over many of the template engines that require some sort of markup to be sent to the client and then compiled there before Backbone.js can use it. The other benefit of using Eco templates is that I can also use them as the template engine to render the HTML views through Express.
The connect-assets library really seemed to do what I was looking for. It combines all of your JavaScript and CoffeeScript per the directives that you specify. It defers to Stylus to handle all of the CSS processing. Under the hood it uses a secondary library, snockets, to process all of the JavaScript and CoffeeScript. However, it does not handle Eco templates natively. Also, it will not allow you to provide multiple file extensions to process a single file through multiple processors, for example template.jst.eco. Here is the code that I came up with to handle processing Eco templates and provide them in a global JST object. The JST object is created in a similar way to how Jammit, by DocumentCloud, does it.
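The gist of the approach is to compile each *.jst.eco file with the eco package and register the result on a global JST object. A standalone sketch under those assumptions (the helper name and directory layout are illustrative):

```javascript
var fs = require("fs");
var path = require("path");
var eco = require("eco");

// Compile every *.jst.eco template under a directory into a single
// JavaScript bundle that registers each template on a global JST object.
function compileEcoTemplates(dir) {
  var output = "window.JST = window.JST || {};\n";
  fs.readdirSync(dir).forEach(function (file) {
    if (!/\.jst\.eco$/.test(file)) return;
    var name = file.replace(/\.jst\.eco$/, "");
    var source = fs.readFileSync(path.join(dir, file), "utf8");
    // eco.precompile returns the template as JavaScript function source
    output += "window.JST['" + name + "'] = " + eco.precompile(source) + ";\n";
  });
  return output;
}
```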
This code works for my needs and gets me a global JST object with all of the Eco templates compiled to JavaScript. You will just need to put the following code into your view to have connect-assets render the correct JavaScript.
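In a Jade view, that would be the connect-assets js helper pointing at the compiled bundle (the bundle name here is an assumption):

```
!= js("templates")
```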
Ultimately, this code and the processing of multiple file extensions should be handled in snockets. This may be something that I will contribute to the project in the future.
In the example below, I have defined a Person class and a PersonWithBirthDate class which derives from the Person class. These can both be stored in the same collection in a mongoDB database. In this case, assume the documents are stored in the Persons collection.
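The classes might look like this; the specific properties are illustrative:

```csharp
using System;
using MongoDB.Bson;

public class Person
{
    public ObjectId Id { get; set; }
    public string Name { get; set; }
}

public class PersonWithBirthDate : Person
{
    public DateTime BirthDate { get; set; }
}
```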
You can easily retrieve any of the documents and have the mongoDB C# driver deserialize the document into the PersonWithBirthDate class. However, you will run into issues if you query the Persons collection and try to have the driver deserialize the data into the Person class. You will receive an error that there are elements that cannot be deserialized.
This is easily fixable by adding the [BsonIgnoreExtraElements] attribute to the Person class. You can see the modified class below. This will instruct the driver to ignore any elements that it cannot deserialize into a corresponding property. With the attribute, any document in the Persons collection can be deserialized into the Person class without error.
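The modified Person class, with the same illustrative properties as above:

```csharp
using MongoDB.Bson;
using MongoDB.Bson.Serialization.Attributes;

// Extra BSON elements (e.g. BirthDate) are now ignored on deserialization
[BsonIgnoreExtraElements]
public class Person
{
    public ObjectId Id { get; set; }
    public string Name { get; set; }
}
```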
There is a gotcha that I recently found while trying to implement a scenario similar to the one above. In the source code for the mongoDB C# driver, the attribute is defined in a way that allows it to be inherited by child classes when applied to a parent class (BsonIgnoreExtraElementsAttribute.cs). However, when the attribute is read, the inheritance is ignored (BsonClassMap.cs) and the attribute does not get applied to the child classes. I agree with this implementation, but it's a little confusing if you review the source code for the definition of the [BsonIgnoreExtraElements] attribute. Even with this inconsistency, all you need to do is apply [BsonIgnoreExtraElements] to each class that may read a document from a collection where there are extra elements.
Implementing a many-to-many relationship in a relational database is not as straightforward as a one-to-many relationship because there is no single command to accomplish it. The same holds true for implementing one in mongoDB. As a matter of fact, you cannot implement any type of relationship in mongoDB via a command. However, the ability to store arrays in a document does allow you to store the data in a way that is fast to retrieve, easy to maintain, and provides the information needed to relate two documents in your code.
In the past, I have modeled many-to-many relationships in relational databases using a junction table. A junction table is simply a table that stores the keys from the two parent tables to form the relationship. See the example below, where there is a many-to-many relationship between the person table and the group table; the person_group table is the junction table.
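In SQL, such a junction table might be defined like this (column names are illustrative):

```sql
-- Junction table: one row per person/group membership
CREATE TABLE person_group (
  person_id INTEGER NOT NULL REFERENCES person (id),
  group_id  INTEGER NOT NULL REFERENCES "group" (id),
  PRIMARY KEY (person_id, group_id)
);
```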
Using the schema-less nature of mongoDB and arrays, you can accomplish the same data model and achieve short query times with the appropriate indexes. Basically, you can store an array of the ObjectIds from the group collection in the person collection to identify what groups a person belongs to. Likewise, you can store an array of the ObjectIds from the person collection in the group document to identify what persons belong to a group. You could also store DBRefs in the array, but that would only be necessary if your collection names will change over time.
This is how some person documents would look in mongoDB.
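For example (ObjectId values elided; the name field is illustrative):

```javascript
// Documents in the person collection
{
  "_id" : ObjectId("..."),
  "name" : "Joe Mongo",
  "groups" : [ ObjectId("..."), ObjectId("...") ]  // mongoDB User, mongoDB Administrator
}
{
  "_id" : ObjectId("..."),
  "name" : "Sally Mongo",
  "groups" : [ ObjectId("...") ]  // mongoDB User
}
```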
This is how some group documents would look in mongoDB.
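Again with ObjectId values elided:

```javascript
// Documents in the group collection
{
  "_id" : ObjectId("..."),
  "name" : "mongoDB User",
  "persons" : [ ObjectId("..."), ObjectId("...") ]  // Joe Mongo, Sally Mongo
}
{
  "_id" : ObjectId("..."),
  "name" : "mongoDB Administrator",
  "persons" : [ ObjectId("...") ]  // Joe Mongo
}
```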
The documents above show that “Joe Mongo” belongs to the “mongoDB User” and “mongoDB Administrator” groups. Similarly, “Sally Mongo” only belongs to the “mongoDB User” group. Effectively, these two arrays make up the data that is stored in the person_group table in the relational database example.
If you choose, you can just create the appropriate array on either the person documents or the group documents, but that would make your queries somewhat more complicated.
The following queries will show you how you can query the data without having to use joins as you would in a relational database.
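For example, in the mongo shell (assuming the name fields shown above):

```javascript
// Which groups does Joe Mongo belong to?
var joe = db.person.findOne({ name: "Joe Mongo" });
db.group.find({ _id: { $in: joe.groups } });

// Which persons belong to the mongoDB User group?
var userGroup = db.group.findOne({ name: "mongoDB User" });
db.person.find({ _id: { $in: userGroup.persons } });
```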
In order to improve the performance of the queries above, you should create indexes on the person.groups field and the group.persons field. This can be accomplished with the following commands.
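For example, using the mongo shell's ensureIndex command:

```javascript
db.person.ensureIndex({ groups: 1 });
db.group.ensureIndex({ persons: 1 });
```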
In general, it's pretty straightforward to implement a many-to-many relationship in mongoDB. I have shown one method of doing this that closely resembles how it's done in a relational database. One thing to keep in mind when it comes to data modeling: there is no silver bullet, and you should always create the most appropriate data model for how your data will be queried.