Welcome To TextEx!

Introduction

Last time, we created the layout for our views using Twitter Bootstrap.  However, these layouts, while pretty, did not quite do anything yet as the tables were just filled with dummy data and the buttons were not bound to any actions.  Today we take the final step in our development process and make these pages useful by tying the actual functionality of the TextEx application to this front-end through the Play framework’s view templates.

From Mock-up to View

Not surprisingly, the Play framework has some tools to make the development of our views (the V in our MVC application) a bit simpler.  Like most web applications, the views of a Play application are standard HTML and CSS, but Play enhances this through the use of Scala templates.  These templates are a mix of HTML and Scala (hence the .scala.html file type), which lets us intersperse logic in our front-end HTML by inserting little snippets of Scala code.  All we have to do is mark the Scala code with a @.  This is especially nice when we only want certain components to show up in certain situations.  For example, the following snippet shows the sign in button only if the user is not logged in.

@if(session.get("username") == null) {
  <a class="btn btn-primary pull-right btn-small" id="signinButton" role="button" href="#signin" data-toggle="modal">Sign In</a>
} else {
  @("Logged in as: " + session.get("username"))
  <a class="btn btn-primary btn-small" id="logoutButton" href="@routes.Application.logout()">Log Out</a>
}

In the snippet above, we check the user’s session variables to see if he or she has logged in (the username variable is set by the login action) and output the appropriate HTML based on whether the user is logged in or not.  This can easily be extended to handle restricted controls that only registered users may access, since all we would need to do is change the HTML in either branch.  Of course, you could do a lot more as Scala is a fully-fledged programming language and you have its full power at your disposal.  For some common use cases, check out the documentation.

Not only that, but you can bind data from your Java controllers to your views by giving your templates parameters.  Want to make a table of all of your search results?  Just give your template a List parameter and use a for loop to create your rows!  You can even pass blocks of HTML to a template, which makes it possible to build a main template that is applied to all of your pages.  This is great as it makes it very easy to handle reused components.  Finally, Play provides a bunch of helpers that you can use to simplify form creation.  You can even override the default field constructors with your own custom ones to make sure that the generated components match the style of your form.  To see all of these ideas in action, check out the search view of the TextEx application, which is shown below.

@(loginForm: DynamicForm, searchForm: DynamicForm, books: List[models.Book])

@import helper._
@implicitFieldConstructor = @{ FieldConstructor(twitterBootstrapInput.f) }

@main("Browse Books", "Browse Books", loginForm, "search") {
<script type="text/javascript" src="@routes.Assets.at("></script>
<div class="row-fluid">
<div class="span3 well">
<h2>Search for Book</h2>
 @helper.form(action = routes.Book.search(), 'id -> "registerForm") {
 @helper.inputText(searchForm("isbn"), '_label -> "ISBN", 'id -> "isbn")
 @helper.inputText(searchForm("name"), '_label -> "Title", 'id -> "title")
 @helper.inputText(searchForm("authors"), '_label -> "Authors", 'id -> "authors")
 @helper.inputText(searchForm("publisher"), '_label -> "Publisher", 'id -> "publisher")
 @helper.inputText(searchForm("edition"), '_label -> "Edition", 'id -> "price")
 @helper.inputText(searchForm("price"), '_label -> "Price", 'id -> "price")

 <button class="btn btn-primary" id="searchSubmit" type="submit"><i class="icon-search icon-white"></i> Search</button>
 }</div>
<div class="span9">
<div>
<h2>Search Results</h2>
</div>
<table class="table table-bordered">
<thead>
<tr>
<th>ISBN #
 <button class="btn btn-primary btn-sort" id="resultsIsbn" onclick="BrowseBooks.sortClicked(event);">
 <i class="icon-arrow-down icon-white" id="resultsIsbnIcon" onclick="BrowseBooks.sortIconClicked(event);"></i>
 </button></th>
<th>Title
 <button class="btn btn-sort" id="resultsTitle" onclick="BrowseBooks.sortClicked(event);">
 <i class="icon-arrow-down" id="resultsTitleIcon" onclick="BrowseBooks.sortIconClicked(event);"></i>
 </button></th>
<th>Authors
 <button class="btn btn-sort" id="resultsAuthors" onclick="BrowseBooks.sortClicked(event);">
 <i class="icon-arrow-down" id="resultsAuthorsIcon" onclick="BrowseBooks.sortIconClicked(event);"></i>
 </button></th>
<th>Edition
 <button class="btn btn-sort" id="resultsEdition" onclick="BrowseBooks.sortClicked(event);">
 <i class="icon-arrow-down" id="resultsEditionIcon" onclick="BrowseBooks.sortIconClicked(event);"></i>
 </button></th>
<th>Base Price
 <button class="btn btn-sort" id="resultsBasePrice" onclick="BrowseBooks.sortClicked(event);">
 <i class="icon-arrow-down" id="resultsBasePriceIcon" onclick="BrowseBooks.sortIconClicked(event);"></i>
 </button></th>
<th>Publisher
 <button class="btn btn-sort" id="resultsPublisher" onclick="BrowseBooks.sortClicked(event);">
 <i class=" icon-arrow-down " id="resultsPublisherIcon" onclick="BrowseBooks.sortIconClicked(event);"> </i>
 </button></th>
</tr>
</thead>
<tbody>
@if(books != null) {
  @for(book <- books) {
  <tr>
    <td>@book.getIsbn()</td>
    <td><a href="@routes.Book.details(book.getIsbn())">@book.getName()</a></td>
    <td>@book.getAuthors()</td>
    <td>@book.getEdition()</td>
    <td>$@String.format("%.2f", Double.valueOf(book.getPrice()))</td>
    <td>@book.getPublisher()</td>
  </tr>
  }
}
</tbody>
</table>
</div>
</div>
}

The search view uses a template parameter to handle the search results (the books parameter) and a main template to handle the application’s shared components.  It even uses a custom field constructor to build the search fields.  The custom field constructor is set on line 4 and its source is shown below.

@(elements: helper.FieldElements)

@*
Generate input according to Twitter Bootstrap 2.0 rules.
To use, include the following in your view file:
  @implicitFieldConstructor = @{ FieldConstructor(twitterBootstrapInput.f) }
*@
<div class="control-group @if(elements.hasErrors) {error}"><label class="control-label" for="@elements.id">@elements.label</label>
<div class="controls">
@elements.input

@elements.infos.mkString(", ")

 @if(elements.hasErrors) {

@elements.errors.mkString(", ")

 }</div>
</div>

Once all of the templates have been set up, using them is as simple as calling the render method of the generated class. The following code snippet generates the default (empty) search page using the search template shown above.

  /**
   * Takes the requester to the default search / browse books page.
   *
   * @return A 200 {@link Status} to the default search / browse books page.
   */
  public static Result search() {
      return ok(search.render(new DynamicForm(), new DynamicForm(), null));
  }

Issues

Overall, I found transitioning the mock-ups to actual Scala templates to be rather straightforward once I learned the syntax.  The biggest problem was figuring out what parameters to use for the templates.  Many times, I wanted to show a success or error message based upon the results of the request, and I first went about this by adding a bunch of Boolean parameters.  This approach worked, yet it quickly became cumbersome as the parameter lists started to become rather long.  Consequently, I started to use the session and flash scopes to transmit these flags, which gave the same results with much cleaner templates.  I also abandoned the use of the helpers at some points simply because copying my HTML directly from the mock-up is much easier than trying to fit it into a helper.  I do want to extract those forms into helpers and field constructors at some point though, since it would be better practice to do so.  Nonetheless, I really like the Scala templates as they seem really powerful, especially because the logic runs server-side, so I can restrict which components the browser receives without resorting to crazy client-side Javascript.
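
As a rough illustration of the flash approach (the action name and the exact messages here are stand-ins rather than TextEx’s actual code), a controller action can stash a one-request message instead of threading Boolean flags through the template parameters:

  public static Result addBook() {
    DynamicForm form = Form.form().bindFromRequest();

    // ... validate the submitted fields and save the new book ...

    // Stash a one-off message that survives exactly one subsequent request.
    flash("success", "Your book was added to TextEx.");

    // The search template can then check flash.get("success") and display a banner.
    return redirect(routes.Book.search());
  }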

Final Thoughts

In conclusion, my experience with Play has been mostly positive despite a few problems (the same cannot be said for Divshot).  On one hand, there were some strange and frustrating issues with the Ebean ORM and how it handled result retrieval, but on the other hand the Scala templates and the overall support for the MVC architecture make Play a fairly simple, yet powerful framework once you figure out its quirks.  Unfortunately, I was not quite able to finish the TextEx application at this point due to time constraints.  It currently only supports registering, logging in / out, and adding / searching for books, and the whole request and offer portion of the system is incomplete.  Furthermore, there are a bunch of small features that I would like to include (e.g. sorting search results through client-side Javascript).  Even so, I am confident that I can finish it when I have some free time as the remaining work is quite similar to what I have already completed.  In addition, I am planning to use the Play framework for a personal project this summer (I plan to reboot the UH Band Attendance Database project for a friend).  As a result, I think my future with Play is looking quite bright.  Now the only question is: do you want to Play?

Trying To Create My Views: An Exercise in Frustration

Flippin Tables!

Introduction

Interface design can be really cool because there are so many different approaches you can take, and figuring out which approach best suits your needs is a very fulfilling mental puzzle.  In addition, seeing your creation come to life before your eyes is always satisfying.  This week, I designed the interface for the TextEx application.  Like many web interfaces, TextEx’s interface is built using HTML and CSS on top of the Twitter Bootstrap framework, which I have used in the past (see WattDepot-SPA).  Consequently, I was hoping that doing interface design for my TextEx project would be a fun and relaxing experience, but unfortunately that was not the case, as the editor I was using, Divshot, caused so many unnecessary errors that I really did want to flip some tables.

Beta is Beta…

Before I start tearing it apart, what is Divshot?  Divshot is an online HTML / CSS / Javascript web page editor with built-in Twitter Bootstrap support.  Divshot aims to make web development easier by providing a WYSIWYG page designer that allows you to drag and drop Twitter Bootstrap components into your page, in addition to a code editor.  The nice thing about this set-up is that you can use a split screen mode with the visual designer above the code editor, which allows you to see your changes in real time.  The result is something very similar to Microsoft Visual Studio’s visual editor, but for HTML instead of WPF or WinForms and without the fancy auto-completion support.  This all sounds great, but I think these awesome features might not be worth the headache at this point.  Divshot is currently in a public beta phase, and while its core functionality works (the designer and code editor integration is perfect!), there still seem to be many bugs and quirks.  Of course, bugs should be expected in a beta, and hopefully these issues will be resolved before release as Divshot has a lot of potential once they are removed.  Nevertheless, I will list the issues I encountered to hopefully inform you of what to expect if you do try it in its current state.

The first thing you should know is NEVER USE THE REDO / UNDO BUTTONS.  Divshot has redo and undo buttons like most IDEs.  It even binds the standard keyboard shortcuts to these commands (ex. CTRL + Y = redo and CTRL + Z = undo if you are running Windows).  However, these commands currently behave very strangely.  You would typically expect undo to undo the last thing you did, but it seems to jump back to a seemingly random point which can be many edits ago.  No problem, we have the redo command, right?  Nope.  Sometimes hitting redo after the random skip does nothing, which means that you just lost all of that work!  This is especially frustrating when it takes you all the way back to the starting design.  As a result, just stay away from these commands unless you really do not mind redesigning your current page from scratch.

Another issue I encountered is that Divshot can fail to save your changes.  Divshot automatically saves in the background as you work (think Google Docs) and it also lets you explicitly save by pressing CTRL + S (on Windows).  Yet, sometimes your changes disappear when you navigate away from the page even if you did the explicit save.  At one point, I was working on several pages and did not realize that the save was failing on each one because the save confirmation pop up kept showing up for every single page.  It was only when I went back to one of the supposedly edited pages that I realized all of the work I had just done was lost.  To figure out what was going on, I kept making small changes between two pages, and the changes were never saved (the pop up kept saying they were!), before just giving up and refreshing the browser.  Apparently that worked and I could keep working like normal.  Luckily, there is a simple fix here, but losing all of that work and time was annoying to say the least.

If those problems were not enough, Divshot’s preview feature is rather strange as well.  Divshot allows you to preview your current page as pure HTML (without WYSIWYG support) so you can test things like Javascript and events.  Nonetheless, the preview has an odd habit of loading with incomplete tags / scripts.  For instance, you might add a new Javascript function to your page and attach it to an onclick handler, but when you load the preview, only half of the script is there, which causes a Javascript syntax error.  Similar things can happen with the HTML as well, with missing / incomplete tags resulting in a malformed page.  This can normally be fixed by reopening the preview, but it is just annoying to have to open it twice.  An alternative solution is to leave the preview window open and hit save in the editor, as this seems to force the preview to update.  Still, it is just annoying.  When I want to preview something, I want to preview what I currently have in my editor, not a malformed part of it!  I think this is related to the automatic saving issue mentioned in the previous paragraph and I really hope it gets fixed, because it makes debugging Javascript in Divshot excruciatingly annoying.

Finally, Divshot seems to randomly destroy my HTML tags.  I would be working on several different pages only to come back to a page and find a mess because one of the HTML tags or its attributes had been mangled.  I have no idea why it happens (maybe I am just not paying attention?) and fortunately it seems to happen very infrequently, but it is a pain to have to clean up the mess caused by these random changes.  There you have it; those are the table-flip inducing problems that I encountered while designing my views.  While I will not pin too much blame on Divshot as it is still in beta, it did cause me more headaches than a more standard two window browser + IDE combination would, and I feel that is something worth pointing out.

The Design

The TextEx landing when the user is logged out.

The TextEx landing when the user is logged in.  Note the extra options at the top.

Now that I have torn Divshot apart, what did I actually make?  I am not the most artistic person and my design is rather plain as a result.  Instead, I chose to focus on functionality.  The first thing of note is that each page takes a different form depending on whether the user is logged in or not.  Basically, all a user can do is search if he / she is not logged in, since all other actions could be abused if logging in were not required.  Note how the navigation menu at the top of the landing page changes according to the user’s log-in status.

Signing into TextEx is simple!

Logging in is simple: the user clicks the sign in button and a modal containing the log in form appears.  Once the user has logged in, the modal disappears and the user has full access to TextEx’s features.

Simple form for adding a new book.

One basic function is to add a new book to the system.  TextEx uses a simple form to take all of the relevant data.  It even has an image uploader so that the user can provide a cover shot of the book.

Search for all books by doing an empty search!

Another basic function is the ability to search for books.  The search form is on the left and the results populate a table on the right.  This keeps the form and the results in close proximity, which should make searching easier for the user.  Note that the book names in the table are links; by clicking on a link, the user can see that book’s page.  Also note that a search without any fields will return all books.

The top of the book’s page has its information.

The bottom of a book’s page has the current offers and requests for the book.

Each book’s page contains almost everything the user needs.  At the top is the book’s information.  Below that are tables listing the current offers and requests for this book.  If the user is logged in, the editing and contact buttons will be visible.   The user can also add a new offer or request from this page by filling out the form at the bottom of the respective table.

Lists all of your currently posted offers and requests with the ability to delete / modify them.

Finally, all of the user’s offer and request information can be viewed in one centralized location by going to the My Offers / Requests page while logged in.  Here, the user can edit or delete any offers that are currently posted using the appropriate buttons and filling in the fields of the modal that appears.

Conclusions

All in all, making the views was a much more painful process than I had anticipated, especially given my experience with Twitter Bootstrap.  Divshot looks like a great tool, but in its current beta form it just causes more problems than it’s worth.  Once Divshot was out of the way, the rest of the design process was quite smooth besides a Javascript bug or two.  For those of you adventurous folk who want to give Divshot a chance, you have my warning.  Regardless, I hope you learned a little something today, and please join me next time as I hook up this interface to my actual application.  Have fun!

C is for Controller

Introduction

Welcome back!  Last time, we started our journey into the MVC software architecture by designing our models for our Play application.  This was nice because we were able to define our database tables and do some querying on those tables using pure Java without having to write any SQL.  However, these tables are useless if no one can use them.  As a result, we need to define the controllers that allow the user to access and change the underlying models.  Of course, this will all take place under-the-hood as we will hide the code behind a shiny graphical user interface (more on that next time!), but for now we shall focus on implementing the functionality that the GUI will rely on. As with last time, I will be exploring this part of the architecture using my TextEx textbook exchange application.

Controllers in Play

Implementing controllers in Play is rather straightforward in theory: all one really needs to do is create a class that extends the Controller class and implement a bunch of static methods inside of that class.  These methods will be called by the user through the GUI and return a Result once they finish executing their actions.  Typically, this result is an HTTP status indicating what happened during execution.  For instance, if everything went fine, the method should return a 200 OK status, but it should return a 400 BAD_REQUEST status if the request was malformed.  Fortunately, Play makes it easy to return a variety of different statuses with the functions provided in the Results class.
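
For instance, a bare-bones details action for the Book controller might look like the sketch below (the finder call mirrors the one used in the data binder example later in this post, but the exact error handling is my own assumption):

  package controllers;

  import play.mvc.Controller;
  import play.mvc.Result;

  public class Book extends Controller {

    /**
     * Retrieves the details of the Book with the given ISBN.
     *
     * @return A 200 OK status with the book's details, or a 400 BAD_REQUEST if no such book exists.
     */
    public static Result details(String isbn) {
      models.Book book = models.Book.find().where().eq("isbn", isbn).findUnique();
      if (book == null) {
        // Nothing matches the requested ISBN, so report an error status.
        return badRequest("No book found with ISBN " + isbn);
      }
      // Everything went fine, so return a 200 OK with the book's information.
      return ok(book.toString());
    }
  }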

Once all of the methods have been created, there is only one more step (two if you count testing) before our controllers are good to go!  Obviously, we will need some way of calling these methods, and this is achieved through the routes file (found in the conf folder of your Play application), which maps various HTTP requests to our functions.  These requests use URIs to handle the queries, which allows us to create a RESTful API for our application and take advantage of the built-in HTTP methods.  The basic format of a line in the routes file is (in this order):

  1. The HTTP method that will invoke this call (e.g. GET, PUT, POST, or DELETE).
  2. The relative URI used to invoke the call.  This is where the user would send the request to perform the action.  A nice thing about the routes file is that it uses Scala and can pass parts of the URI to the call.
  3. The actual Java method that will be invoked.  As some of these methods might take parameters, we can use Scala to pass pieces of the URI as the method’s parameters.  Scala even allows for type checking to perform some early validation.

For example, the following snippet maps a GET request that retrieves the details of a Book where the Book’s ISBN is passed at the end of the URI:

GET     /books/:isbn                controllers.Book.details(isbn: String)

After filling out the routes file, our controllers should be ready for use (and testing).  Sure seems simple!  Nonetheless, things can get a little complicated once we venture beyond the simplest of cases.  Retrieving and deleting data tends to be simple enough, but adding data to our database through a POST request is a completely different story.  When adding data, validation is of the utmost importance.  While Play does provide a bunch of annotations that we can apply to the fields in our models for automatic validation, they are typically not sufficient as some key checks are missing.  For instance, there is no Min or Max check for floating point numbers (the ones provided only work for integers), so such checks must be implemented by hand by adding a custom validate method to our model class or building them into the controller (see the sketch below).  Yet, one must be careful as POST requests seem to automatically fill in primitive types with 0 (or 0.0), which can make it difficult to detect whether that information was actually provided by the user.  A simple workaround is to check whether the POST request actually has that field, but this effectively renders the provided @Required check, which should cause an error if the user neglects to provide a value for the annotated field, useless in that case.  Even something as simple as ensuring that an item is unique requires a handcrafted check against the database, as the @Column(unique = true) annotation is only enforced by the database itself.  If the user tries to add a duplicate, the program will throw an exception when it tries to insert that duplicate into the database instead of producing an error during the validation step.
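
To make this concrete, the following is a rough sketch of such a hand-written validate method inside the Book model (Play calls a public validate method after binding a form to the class and treats a non-null return value as a form error; the field names and the assumption that getPrice() returns a double are mine):

  // Inside models.Book: Play invokes validate() after binding a form to this class
  // and treats a non-null return value as a form error.
  public String validate() {
    // There is no Min constraint for floating point numbers, so enforce the bound by hand.
    if (this.getPrice() < 0.0) {
      return "Price must be zero or greater.";
    }
    // @Column(unique = true) is only enforced by the database, so check for a
    // duplicate here instead of letting the later save() throw an exception.
    if (models.Book.find().where().eq("isbn", this.getIsbn()).findUnique() != null) {
      return "A book with this ISBN already exists.";
    }
    return null;  // null signals that validation passed
  }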

Another trap occurs when trying to add a new model type that contains complex types (i.e. a reference / foreign key to another model type).   The POST request sends everything as a String-String key-value pair and as a result, it cannot send a whole complex type.  There are a couple of ways around this, but the best one I found was to use a data binder that registers a Formatter telling Play how to handle those complex types.  In my case (as seen in the example below), I just send the complex type’s ID (a Book here) in the POST request and use that ID to retrieve the actual complex object in the data binder.  This requires that the object is already in the database which is acceptable in my TextEx application.

      Formatters.register(models.Book.class, new Formatters.SimpleFormatter<models.Book>() {

        @Override
        public models.Book parse(String isbn, Locale locale) throws ParseException {
          return models.Book.find().where().eq("isbn", isbn).findUnique();
        }

        @Override
        public String print(models.Book book, Locale locale) {
          return book.toString();
        }

      });

What if you want to create multiple linked objects in one POST request?  Once again, there are several ways to do it.  One simple way is to make the link optional (the foreign key / reference field in the model) and add it manually.  As an example, the code snippet from the Warehouse exercise below adds both an Address and a Warehouse from one POST request and manually sets the link between the Warehouse and its Address.  While this might not be the most efficient or elegant way, it works!  Please note the use of an explicit setter.  Play does some things in the background to generate getters and setters, but that seems to cause more problems than it solves.  Consequently, I have defined explicit getters and setters for all of my models.

  public static Result newWarehouse() {
    // Create Warehouse and Address forms and bind the request variables to them.
    Form<models.Warehouse> warehouseForm = Form.form(models.Warehouse.class).bindFromRequest();
    Form<models.Address> addressForm = Form.form(models.Address.class).bindFromRequest();
    StringBuilder errorStringBuilder = new StringBuilder();

    if (addressForm.hasErrors()) {
      errorStringBuilder.append(Helpers.generateErrorString(addressForm));
    }

    // Validate the form values.
    if (warehouseForm.hasErrors()) {
      errorStringBuilder.append(Helpers.generateErrorString(warehouseForm));
    }

    if (errorStringBuilder.length() > 0) {
      return badRequest(errorStringBuilder.toString());
    }

    // form is OK, so make a Warehouse and save it.
    models.Warehouse warehouse = warehouseForm.get();
    models.Address address = addressForm.get();
    address.setWarehouse(warehouse);
    warehouse.save();
    address.save();
    return ok(warehouse.toString());
  }

Conclusions

All in all, controllers in Play are deceptively simple.  Creating simple controllers is extremely easy, but that ease quickly turns into frustration when one tries to do something more complicated due to the many quirks of the framework.  Nevertheless, I do appreciate Play, as not having to write any SQL is still very nice.  Not to mention that the problems I encountered are very good, albeit harsh, lessons and I will not be forgetting their workarounds any time soon.  Hopefully this post gave you some insight into Play controllers and maybe it will save you some of the headaches that I had to endure.  Thank you for reading and please join me next time when we start to work on the last component in the MVC architecture, the View.

Putting the M in MVC


Introduction

Now that we know how to make a simple Play application, we can start making something more substantial.  As mentioned in the previous post, Play is built around the Model-View-Controller (MVC) architecture, which specifies how an application should be split into manageable chunks.  These chunks are neatly segmented to limit interaction between them, which allows developers to take them on one at a time.  Today we shall start with the first component, the Model, and see how we can implement models in Play.

Modeling in Play

Models in the MVC architecture correspond to the persistent storage layer of the application.  This persistent storage typically comes in the form of a database, which needs to be configured so that its table schemas match the needs of our application.  While this can be done using SQL, Play uses the Ebean object relational mapper (ORM) to take care of it for us.  All we need to do is make Java objects that extend Ebean’s Model class and mark the resulting classes with the @Entity annotation, which tells Play that each of these classes should be mapped to a table in our database.  Of course, we want to specify the columns in these tables as well, and this can be achieved by simply adding non-static fields to our objects, as each non-static field is converted into a column of the table.  Once that is done, Play and Ebean can do their magic and set everything up for us.

For further customization, various JPA annotations can be applied to the fields with different results.  For example, the @Id annotation can be used to specify a primary key or the @Required annotation can be applied to ensure that the annotated field is never null.  These annotations can also be used to represent foreign key relationships between entities.  For instance, the @OneToMany annotation above a field signals that this entity has a one-to-many relationship with the type of the field.  Such annotations can have a variety of options as well.  One common option is the cascade option which allows certain operations to trickle through the foreign key links and make changes to both the target table and those related to it.  Another option is mappedBy which specifies which side of the relationship holds the column containing the foreign key.  In a one-to-many relationship, the many side typically holds the column, but it is effectively up to the developer in one-to-one or many-to-many relationships.  Using these annotations, we can tweak our tables to our liking using plain Java code with no SQL to break us out of our workflow.
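
To make this concrete, here is a trimmed-down sketch of what a TextEx entity could look like with these annotations (the exact fields, the Offer link, and the cascade choice are illustrative rather than the project’s actual source):

  package models;

  import java.util.List;
  import javax.persistence.CascadeType;
  import javax.persistence.Entity;
  import javax.persistence.GeneratedValue;
  import javax.persistence.Id;
  import javax.persistence.OneToMany;
  import play.data.validation.Constraints.Required;
  import play.db.ebean.Model;

  @Entity
  public class Book extends Model {

    @Id
    @GeneratedValue
    private Long id;       // primary key column

    @Required
    private String isbn;   // must be supplied when a form is bound to this class

    @Required
    private String name;

    // One Book can have many Offers; the Offer table holds the foreign key column
    // (hence mappedBy), and the cascade option lets operations trickle through the link.
    @OneToMany(mappedBy = "book", cascade = CascadeType.ALL)
    private List<Offer> offers;

    // Explicit getters and setters omitted for brevity.
  }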

Once the tables have been set up, querying and modifying them is a rather straightforward process.  Since each entity class extends Model, it comes with a bunch of built-in functions that can be used to save, delete, and update rows in the table.  For example, adding a row to a table is as simple as creating an instance of an entity class and invoking its save method.  Retrieving rows from a table is also fairly simple, as all you need to do is create a Finder for the class.  The Finder can then be used to perform a range of queries using plain Java or SQL.  However, deleting rows seems to be a bit more problematic.  According to the API, simply calling the delete method on an instance should delete it from the database, but delete seems to have difficulty dealing with foreign keys.  If you try to delete the owner of the foreign key and have cascading enabled, Play seems to have a hard time removing the row as it is unable to perform the cascaded delete on the row in the foreign key’s table.  I have tried setting the foreign key in the owning row to null in an attempt to prevent the cascading, but that seems to cause the delete operation to wipe out entire tables.  I am currently looking into how to resolve this issue and I am considering abandoning cascades on delete because of it.
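
As a small illustration of that workflow (the default constructor and setter names are assumptions based on the getters used elsewhere in this project), adding, querying, updating, and deleting a row looks roughly like this:

    // Adding a row is just creating an instance and calling save().
    models.Book book = new models.Book();
    book.setIsbn("978-0321356680");
    book.setName("Effective Java");
    book.save();

    // Querying goes through the class's Finder instead of hand-written SQL.
    List<models.Book> allBooks = models.Book.find().all();
    models.Book match = models.Book.find().where().eq("isbn", "978-0321356680").findUnique();

    // Updating and deleting are instance methods as well.
    match.setName("Effective Java, Second Edition");
    match.update();
    match.delete();  // this is the call that runs into the cascade trouble described above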

My Models

Warehouse

As we can see, models in Play are rather nice (when they work, that is), but what can we do with them?  To practice my modeling skills, I am currently working on two Play applications.  The first is essentially a tutorial, as we are stepping through it in my ICS 613 class.  This application models a warehouse that stores quantities of different items.  There are five interrelated models in this application.  The models and their relationships can be seen in Crow’s Foot notation in the diagram above.  Furthermore, if you wish to see the code for the models, please visit the warehouse project’s GitHub page.

BookExchange

The second application is a textbook exchange application, called TextEx, that will allow students to sell and buy used textbooks directly to and from each other.  The model here was a bit more difficult to ascertain as the problem is more loosely defined.  In fact, I initially modeled the relationships between the Books / Students and the Requests / Offers incorrectly.  I originally had them as one-to-many relationships where there was always at least one Request / Offer on the many side.  However, that is not always the case as it is possible that nobody wants to buy or sell a particular book.  Thankfully, I noticed this during a code review and was able to make the change before continuing with development!  For more information on the TextEx project, please visit its GitHub page.

Conclusions

Overall, the modeling capabilities of Play are quite convenient.  The ability to define tables in pure Java is very nice and adding rows works without a hitch.  Deleting rows is more problematic, however, and I will need to do some more tinkering before I can integrate deletes into my applications.  Nonetheless, I was able to successfully create two sets of models using Ebean and lightly test them by adding and retrieving rows from the tables.  Hopefully, I can find a solution to the delete issue soon, as that may be an important operation as development continues and I set my sights on the controller, or the C in MVC.

Time to Play!

The Internet is an integral part of the lives of many people and as such, the importance of developing and maintaining the services that these people have come to love cannot be overstated.  Sure, mobile applications have been surging in popularity over the past several years, but the web is not going away any time soon!  In fact, responsive web applications can be used by both mobile and desktop users to tap into both user bases.  Consequently, there are tons of technologies that can be used for web application development in many different programming languages.  I have dabbled in web application development before (see WoWStats), but here we shall take a look at a real web framework, the Play framework, to truly see what web application development is all about.

ToDoList

The Play framework is a Scala / Java-based web framework that supposedly has a Ruby on Rails feel to it.  I personally have not used Ruby on Rails, but Play seems neat even though I have only used it to create a very simple ToDoList application (shown in the screenshot above) that adds and deletes text strings from a database.  Play streamlines the entire development process as it essentially comes with its own project management and build tools: you can create, build, test, and run your project using Play’s command line tool.  It even comes with a built-in server and database to make setting up a new project a 30 second (or less) affair.  The developers of Play were also thoughtful enough to provide full Eclipse and IntelliJ project support to make importing Play projects a breeze.  Play is built around the MVC application architecture to prevent your projects from becoming a complete mess and uses a variety of technologies to keep the code simple.  I especially like how Play uses Ebean to handle the database work with pure Java code.  No more SQL to break my Java workflow!  The best part is that it worked right out of the box on my Windows 7 machine.  All I had to do was download and run the installer, add the install directory to my path, and it was good to go.  No extra configuration needed!

All in all, I am quite pleased with my first taste of the Play framework.  It comes with a ton of useful features right out of the box and the power afforded by this framework makes me wonder why I did not try it earlier.  My previous attempts at web development were based on the AMP stack (Apache HTTP Server, MySQL database, and PHP) without a framework, and it was a rather painful experience (mostly in terms of getting everything configured properly).  The Play framework removes all of these configuration problems and lets me focus on the actual problem at hand.  Not to mention that Play uses my favorite programming language, Java, to ensure that everything is strongly typed.  I found it quite easy to mess up my variable types if I was not paying attention in the weakly typed PHP language, which resulted in more bugs and headaches than necessary.  Now Play and my Java compiler will complain when I try to do something too crazy, and hopefully that will lead to increased productivity.  As a result, I am very excited to get started with Play!  If you also want to join in on the fun, please visit the Play website and check out the documentation, which also includes the ToDoList application.  Have fun!

What is Open Source?

Introduction

When I hear of open source software, several different ideas come to mind.  Of course, open source means that the source code should be publicly available, be it on a dedicated company / project website or a repository dedicated to open source software like GitHub or SourceForge.  Another notion closely associated with open source software is that it is free, as in you do not have to pay for it.  However, that is not necessarily the case, as open source software can be sold commercially.  So what exactly does open source software entail?  Let’s find out from the Open Source Initiative!

What is Open Source? 

As the name implies, open source software must make its source code publicly available in a non-obfuscated form.  In addition, there are several provisions that prevent discrimination in the software’s license and protect those who derive from the original open source work.  Interestingly, the definition does not prevent the sale of said software; it is perfectly fine to sell open source software.  The only stipulation is that one cannot charge royalties or fees when another party uses or redistributes the software.  Hence, open source software is not always free in the monetary sense, even though being open source does require a bit more than simply revealing the source code.

Free Software

What about free software?  Free must mean that there is no monetary cost right?  Once again, that is not quite correct as free software can indeed be sold.  Here, free means freedom, not price, as free software must respect the user’s four freedoms (as stated in the definition of free software):

  1. The freedom to run the program, for any purpose.
  2. The freedom to study how the program works and change it so it does your computing as you wish.  Access to the source code is a precondition for this.
  3. The freedom to redistribute copies so you can help your neighbor.
  4. The freedom to distribute copies of your modified versions to others. By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.

Sounds pretty similar to the definition of open source software, right?  Almost!  While both the open source and free software movements seem to be aiming in the same direction, there are some subtle differences.  One difference is that the definition of free software is more demanding in terms of the rights given to the users, as open source software can allow licenses that are restrictive to the point that they are no longer “free” (in the free software sense).  In addition, open source is focused more upon licensing than on the actual ability to install and run modified executables.  Consequently, free and open source are not necessarily synonymous despite their many similarities.

My Thoughts

All of this open source and free stuff sounds great, but how does it affect us?  For one, many of us make use of open source software in our daily lives.  I cannot go a single day without opening my open source Eclipse IDE to do Java, C / C++, or Python development.  In addition, many people use Mozilla Firefox, which is an open source web browser (it is my browser of choice when running Ubuntu).  Secondly, following the open source or free software philosophies can help you get your ideas out into the larger world.  Truly interesting projects can attract the attention of developers from around the globe who can help to turn your idea into a successful software application.  Making a project open source allows those interested developers to lend their expertise to the project and accomplish things that you would be unable to do by yourself.  Furthermore, making your code open source is a great way for future employers to see what you can do, and being able to show your recruiter / interviewer a few awesome open source projects would surely be to your benefit!

Now that we know what open source software is and why it is great, how does one get involved?  Fortunately, there are many online repositories, such as GitHub, SourceForge, and Google Project Hosting, where a developer can host open source projects at no cost.  All you really need to do is make a project there and push your code to the repository for everyone to see.  Nevertheless, make sure that you have a license for your project.  Licenses are important as unlicensed code cannot be used by others!  There are many kinds of licenses with many different restrictions.  I personally prefer the MIT License which lets users do whatever they want with the code because I do not want to prevent people from using or modifying my code.  In fact, I would be honored if they did so!  I hope this was a good brief introduction to open source and free software.  Now let’s get out there and start collaborating!

Meet Orthogonobot!

Orthogonobot-Dodge

Overview

Judgment day is almost here, for my Robocode robot at least!  The Robocode tournament will commence tomorrow and I would like to introduce you all to my entry, the Orthogonobot!  Orthogonobot is my latest attempt at creating an unstoppable virtual-robot-killing machine and is my best try to date.  Here I shall discuss what Orthogonobot does along with what did and did not work.

Design

The strategy behind the Orthogonobot can be separated into three main components.  The first is movement.  If the robot cannot move well, it will be easy pickings for enemy robots and is sure to fail.  Nevertheless, the movement strategy for Orthogonobot is rather simple: keep the body perpendicular to the enemy and move only when the enemy fires, in order to dodge the bullet.  Not only is this an effective defensive strategy, it also limits the likelihood that the Orthogonobot will get too close to the edge and possibly lose energy due to a collision with the wall.  Orthogonobot also uses a custom event to detect when it gets within a certain distance of a wall and moves away from that wall to further mitigate the chance of a collision.  Finally, the Orthogonobot can go on the offensive if the enemy is low on energy by following it.  This closes the distance between the two robots and improves the accuracy of Orthogonobot’s shots, hastening its victory.
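
The core of that movement policy can be sketched with the standard Robocode API (this is a rough, stripped-down illustration of the idea, not Orthogonobot’s actual source):

  import robocode.AdvancedRobot;
  import robocode.ScannedRobotEvent;

  public class PerpendicularDodgeBot extends AdvancedRobot {

    private double lastEnemyEnergy = 100;
    private int direction = 1;

    @Override
    public void run() {
      setAdjustRadarForRobotTurn(true);  // keep the radar sweep independent of body turns
      while (true) {
        setTurnRadarRight(360);          // keep scanning for enemies
        execute();
      }
    }

    @Override
    public void onScannedRobot(ScannedRobotEvent e) {
      // Keep the body roughly perpendicular to the enemy so we present a narrow, sliding target.
      setTurnRight(e.getBearing() + 90);

      // A drop in the enemy's energy between 0.1 and 3.0 usually means it just fired a bullet.
      double energyDrop = lastEnemyEnergy - e.getEnergy();
      if (energyDrop >= 0.1 && energyDrop <= 3.0) {
        direction = -direction;      // alternate direction so the dodge is harder to predict
        setAhead(150 * direction);   // slide sideways to dodge the incoming bullet
      }
      lastEnemyEnergy = e.getEnergy();
    }
  }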

The targeting strategy is also straightforward:  scan for an enemy and target the closest one.  Furthermore, the Orthogonobot limits its radar scanning arc to 180 degrees once an enemy has been detected to obtain higher resolution information about the targeted enemy’s current state.  Once the target has been acquired, the gun is rotated towards its location.  Unfortunately, there is no target leading in the current version of the Orthogonobot.  I did attempt to implement some target leading code, but I was unable to debug it in time for the competition.  This strategy appears to be sufficient against simple robots though.

Finally, there is the firing strategy.  Once again, the firing strategy is straightforward: the Orthogonobot fires bullets of different strengths based on its current distance from the target.  Farther targets lead to weaker but cheaper shots, as the distance increases the likelihood of a miss.  Conversely, closer targets warrant stronger, yet more expensive, bullets to increase the amount of damage inflicted.  The firing is also limited by the movement of the turret.  It does not make sense to fire the gun if it is not pointing at the enemy, so the Orthogonobot makes sure that the turret is done turning before firing.
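
Again, as a rough illustration rather than Orthogonobot’s actual source, the distance-based bullet power and the wait-for-the-gun check could look something like this:

  import robocode.AdvancedRobot;
  import robocode.ScannedRobotEvent;

  public class DistanceFireBot extends AdvancedRobot {

    @Override
    public void run() {
      while (true) {
        turnRadarRight(360);  // sweep the radar until we scan someone
      }
    }

    @Override
    public void onScannedRobot(ScannedRobotEvent e) {
      // Swing the gun toward the scanned robot (the bearing is relative to our body heading).
      double gunTurn = getHeading() + e.getBearing() - getGunHeading();
      setTurnGunRight(normalizeBearing(gunTurn));

      // Do not waste energy firing while the gun is still swinging toward the target.
      if (Math.abs(getGunTurnRemaining()) < 5) {
        if (e.getDistance() < 200) {
          setFire(3.0);  // close target: expensive, high-damage bullet
        } else if (e.getDistance() < 500) {
          setFire(2.0);  // medium range: moderate bullet
        } else {
          setFire(1.0);  // far target: cheap bullet, since a miss is more likely
        }
      }
    }

    // Normalize an angle in degrees to the range (-180, 180].
    private static double normalizeBearing(double angle) {
      while (angle > 180) { angle -= 360; }
      while (angle <= -180) { angle += 360; }
      return angle;
    }
  }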

Results

While simpler than my original goal (I wanted to add target leading), the current Orthogonobot does quite well against the sample robots provided with the base Robocode system as it can beat them all more than 90% of the time.  Most of these battles are won by conventional means through a mix of offense (shooting the enemy with bullets) and defense (dodging enemy bullets), which proves that even this simple design can work.  However, Orthogonobot’s victories against the Walls robot (a robot that navigates along the walls of the battlefield) are purely defensive as it is unable to hit Walls with its bullets.  Nevertheless, it does win by successfully dodging bullets until Walls runs out of power.  This is the case that convinced me to give target leading a try, as the constant movement of Walls makes it difficult to hit.  A win is still a win though, no matter how ugly it is, and I will take Orthogonobot’s 90% win ratio regardless of how it was won!

Testing

Of course, this system had to be tested and I implemented three classes of JUnit test cases to do so.  The first class is acceptance tests, which check that the Orthogonobot can beat the sample robots.  Currently, the acceptance threshold is a win ratio of at least 90%, which Orthogonobot passes with flying colors as mentioned in the previous section.  In addition, I have a couple of behavioral tests that double check that the robot does what I expect it to.  These tests currently check the robot’s firing (ensuring that the robot only fires bullets of the specified strengths) and movement (making sure that the robot does not run into walls).  Finally, there are unit tests to check the helper functions and classes.  Unfortunately, most of these classes are not used by the current version of Orthogonobot as they were implemented to facilitate the target leading system, but they have been tested and are included in the main distribution since their correctness has already been verified.

Lessons Learned / Conclusion

Overall, making Orthogonobot was a fun and edifying project.  On one hand, it was nice to make a robot that can beat the sample robots most of the time.  My previous attempt was a miserable failure in that respect and it was satisfying to finally accomplish my goal.  It was also fun to play with the various software engineering tools used in this project such as Maven, Git, and GitHub’s project hosting.  On the other hand, Orthogonobot ended up being a good example of why one should aim to create a working system before adding more extensions, which gives merit to spiral software development approaches.  Fortunately, I developed a stable, albeit simpler, version of Orthogonobot before tackling the target leading issue, so I can at least use that simpler version in the competition.  If I had focused on the target leading system before ensuring that the simpler version was stable, I would be in a lot of trouble now!  In addition, this project made the usefulness of Git branches very obvious, as I could extend my stable system without worrying about whether my changes would somehow introduce problems into the stable version.  By separating all work out into branches, any bugs or problems introduced by my additions are limited to that branch and the stable master branch remains untouched.  As a result, making Orthogonobot was not only a fun and thought-provoking endeavor, but also a satisfying learning experience as I continue to practice my software engineering skills.