
Blog Archives

Rails basic API authentication with Sorcery and JWT

APIs have become more and more popular due to the increasing demand for mobile and single-page apps, and with them the need for sessionless server-side applications. Token-based approaches are the most popular these days because they are relatively easy to implement. Here you have a great example that uses JWT to secure a microservice-based application.

JWT (or "jot") stands for JSON Web Token and, as the name states, it is a web token structured in JSON format. JWTs are composed of three parts: the header, the payload, and the signature. In short, the header contains the type of token and the algorithm used for the cryptographic signing; the payload (or claims) contains the information we want to send securely to the server; and the signature is computed with a secret key known only to the server and is used to verify that the payload has not been tampered with. It is not the primary objective of this post to explain JWT in detail, so if you want to know more about it you can do so here.

JWTs are used in the following way:

  1. The API server generates the JWT.
  2. The JWT is sent to the client.
  3. The client saves the JWT (e.g. in LocalStorage).
  4. The client sends the JWT back to the server (we'll send it in a request header).
  5. The server uses the JWT's payload data for X or Y purpose; in our case, to identify the user performing the request.

 

Besides JWT, we need an authentication strategy to actually identify users. We'll use Sorcery for this purpose; I recommend you go ahead and read its documentation if you haven't done so. You could implement any other authentication library (or build your own), but for this particular case I personally like Sorcery because it offers all the basic authentication features without being overkill like some other libraries.

This is what the general authentication flow looks like:

Fig. 1 JWT Flow

Conveniently, there is a jwt gem that we can use to generate the user's token, so go ahead, read its documentation, and install it in your project.
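
If you haven't added the gems yet, a minimal Gemfile entry for both libraries might look like this (run bundle install afterwards):

```ruby
# Gemfile
gem 'sorcery' # authentication
gem 'jwt'     # JSON Web Token encoding/decoding
```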

Next, we’ll create a service to provide the JWT, another one to validate it, and a last one to authenticate the user.

app/services/jwt/token_provider.rb
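
The original snippet isn't reproduced here, so here is a minimal sketch of what this service might look like (signing with HS256 and reusing Rails' secret_key_base are assumptions):

```ruby
# app/services/jwt/token_provider.rb
# Sketch: signs a payload and returns the resulting JWT string.
module Jwt
  class TokenProvider
    def self.call(payload)
      JWT.encode(payload, secret_key, 'HS256')
    end

    def self.secret_key
      Rails.application.secrets.secret_key_base # assumption: reuse Rails' secret
    end
  end
end
```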

app/services/jwt/token_decriptor.rb
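
A matching sketch for the decryptor, assuming the same secret and algorithm:

```ruby
# app/services/jwt/token_decriptor.rb
# Sketch: decodes a token and returns its payload (the first element);
# JWT.decode raises JWT::DecodeError if the token is invalid or tampered with.
module Jwt
  class TokenDecriptor
    def self.call(token)
      JWT.decode(token, secret_key, true, algorithm: 'HS256').first
    end

    def self.secret_key
      Rails.application.secrets.secret_key_base # assumption: same key used for signing
    end
  end
end
```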

app/services/jwt/user_authenticator.rb
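
And a sketch of the authenticator; the user_id claim in the payload is an assumption:

```ruby
# app/services/jwt/user_authenticator.rb
# Sketch: extracts the token from the 'Authorization: Bearer xxxx.yyyy.zzzz'
# header, decodes it, and looks up the user referenced in the payload.
module Jwt
  class UserAuthenticator
    def initialize(request_headers)
      @request_headers = request_headers
    end

    def call
      return nil unless token
      payload = TokenDecriptor.call(token)
      User.find_by(id: payload['user_id']) # assumption: payload carries user_id
    rescue JWT::DecodeError
      nil
    end

    private

    def token
      @request_headers['Authorization']&.split(' ')&.last
    end
  end
end
```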

NOTE: In this case we are assuming that the JWT comes in the 'Authorization' header with the format 'Bearer xxxx.yyyy.zzzz'; that's why we are splitting @request_headers['Authorization'].

Implementing our services

We need to generate the JWT after the user has successfully logged in:
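
The original controller code isn't shown here; a sketch of a sessions controller could look like this (User.authenticate is Sorcery's model-level helper, the rest of the names are assumptions):

```ruby
# Hypothetical app/controllers/sessions_controller.rb
class SessionsController < ApplicationController
  def create
    user = User.authenticate(params[:email], params[:password]) # Sorcery helper
    if user
      render json: { token: Jwt::TokenProvider.call(user_id: user.id) }, status: :created
    else
      render json: { error: 'Invalid email or password' }, status: :unauthorized
    end
  end
end
```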

Or signed up:
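
A sign-up sketch along the same lines (controller and parameter names are assumptions):

```ruby
# Hypothetical app/controllers/users_controller.rb
class UsersController < ApplicationController
  def create
    user = User.new(user_params)
    if user.save
      render json: { token: Jwt::TokenProvider.call(user_id: user.id) }, status: :created
    else
      render json: { errors: user.errors.full_messages }, status: :unprocessable_entity
    end
  end

  private

  def user_params
    params.require(:user).permit(:email, :password, :password_confirmation)
  end
end
```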

The token returned in the response should be properly saved in the client so it is sent back to the server in subsequent requests’ headers.

Then we need to authenticate the user on future requests. Adding an authentication method to ApplicationController lets us use it in its child controllers to protect our private endpoints:
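
A sketch of that method; the authenticate_request name and the API base class are assumptions:

```ruby
# Hypothetical app/controllers/application_controller.rb
class ApplicationController < ActionController::API
  attr_reader :current_user

  private

  def authenticate_request
    @current_user = Jwt::UserAuthenticator.new(request.headers).call
    render json: { error: 'Not authorized' }, status: :unauthorized unless @current_user
  end
end

# Any private endpoint can then opt in:
class ProjectsController < ApplicationController
  before_action :authenticate_request
end
```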

And that's it, our API is now secured using a sessionless approach. From this point on you can add more complexity using jwt's and Sorcery's methods to, for instance, make tokens expire after one hour or reset a user's password.

Let me know in the comments section below what you think about this strategy.


Dev jobs: A microservices architecture journey

Recently at TangoSource I was assigned to work on an internal project named Dev Jobs (see dev jobs on iOS and dev jobs on Android), where I had the chance to work alongside great teammates and implement a microservice architecture with Ruby on Rails, nodeJS, JWT, Ionic, Docker, and several testing tools, including RSpec, Mocha, Frisby, and Protractor. In this first post, I'll explain the basics of the microservice architecture.

What is a Microservice Architecture?

Martin Fowler gives a great definition.

[1]“The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.”

The Problem

Let's have some context. We are building a server-side application that must support different client-side platforms such as:

  • Desktop browsers
  • Mobile browsers
  • Native mobile applications

 

Also, we needed to consume an external API service (in our case, LinkedIn through OAuth2 for authentication) and give users the ability to chat with each other.

On the architectural side, we still had to decide between a monolithic and a microservice approach, so let's dig a little deeper into these two.

Monolithic vs MicroServices

Monolithic

If we take this approach, we need to keep the following in mind:

  • Build one application to rule them all.
  • Use one language (Ruby, PHP, JavaScript, etc.); we could not take any extra advantage of other programming languages.
  • The architecture needs to be flexible enough to support an API through AJAX and to render HTML.
  • The development team needs to feel comfortable with the language we pick.
  • Security: we will have a public API to serve mobile clients.
  • The best tools for RTCP, so we can have a nice-looking chat.
  • Testing: build testable code.

 

I have seen great monolithic architectures that can support all of this. One of my favorites is an application based on RoR engines or on nodeJS services.

MicroServices

On the other hand, if we go for microservices we must keep the following in mind:

  • Delegate responsibilities to small applications.
  • We can use different languages per service. Which is the best choice per responsibility?
  • How are we going to make the services communicate with each other? Normally over TCP/SSL.
  • How are we going to persist data across services?
  • Security: protect the services.
  • Testing: test all microservices.

 

On a personal note, as a developer working on a microservice platform, I can focus only on the technical details of a specific service. This does not mean I should not understand the whole set of services as a unit.

“Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.”

— Melvin Conway, 1967

With this in mind, we had to start by making technical decisions, such as how to protect our API and how to manage RTCP. HTTP is stateless by nature, which means it is sessionless, so in our case we had to maintain a session through tokens (JWT). At first we thought nodeJS might handle RTCP, since nodeJS is strong for DIRT (Data Intensive Real Time) apps. But we not only had to take care of the technical details, we also had to consider our dev resources: at TangoSource we have strong Ruby on Rails devs, so we had to work together and take advantage of each team member's expertise. Our best bet was to use RoR for the business logic and take advantage of nodeJS where it shines. The combination of RoR and nodeJS makes the solution solid and strong.

Solution

For this particular project, we decided to go for microservices and have some fun! The first thing we needed to figure out was the responsibilities per service, so we ended up with 4 microservices.

  • Ionic
    • More than a service, it’s the client side of the application. Ionic by itself is a front end SDK for hybrid mobile apps using AngularJS.
  • IDP
    • Responsible for (a) providing identifiers for users looking to interact with the system, (b) asserting to such a system that an identifier presented by a user is known to the provider, and (c) possibly providing other information about the user that is known to the provider.
    • This may be achieved via an authentication module, which verifies a security token, and that can be accepted as an alternative to repeatedly and explicitly authenticating a user within a security area.
  • SP
    • In charge of persisting all users' data, including their interactions with the system. This service contains all the business logic.
  • Chat
    • In charge of establishing real-time communication among users within the system.

 

We decided to protect the services with JWT.

Architecture

Dev Jobs architecture diagram

Each square represents a service, and the lines show how they communicate with each other.

Ionic: This might be the most critical service of them all, but not the most important. We have 3 main components:

  • Ionic server
    • An HTTP server in charge of sending all the assets ( HTML, CSS, JS ) to the next component.
  • Ionic Cordova
    • A mobile SDK, which provides the user's interaction with the app and consumes the rest of the microservices as resources. This component runs on top of three different types of devices.
      • Android
      • iOS
      • Browser
  • Ionic App
    • The Ionic admin panel; it allows us to configure GCM and APNS, exposing a protected API.

 

IDP: This service is in charge of giving identity to a component/user. It issues a valid token with the component/user's basic information in the JWT's payload. Similar to memoization, this service keeps track of sessions using Redis over a basic TLS connection. Thanks to the RFC 7519 standard, this service can be reused and scaled in other microservice-based architectures.
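
As a rough illustration only (all names, the payload shape, and the one-hour TTL are assumptions), issuing a token and tracking the session in Redis could look like this:

```ruby
# Hypothetical IDP token issuance with Redis-backed session tracking.
require 'jwt'
require 'redis'

class IdentityProvider
  SECRET = ENV['JWT_SECRET'] # assumption: secret shared with the services that verify tokens
  TTL    = 3600              # one hour, as an example

  def initialize(redis: Redis.new)
    @redis = redis
  end

  def issue_token(user)
    payload = { user_id: user.id, exp: Time.now.to_i + TTL }
    token   = JWT.encode(payload, SECRET, 'HS256')
    @redis.setex("session:#{user.id}", TTL, token) # keep track of the session
    token
  end
end
```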

SP: This is an API/HTTPS service protected with JWT. This microservice is in charge of persisting all user interactions and information using MySQL over a standard TLS connection. It also manages the business logic.

Chat: This microservice provides real-time communication between users. The RTCP (real-time control protocol) traffic is protected with JWT during the handshake process, and the communication activity is persisted in MongoDB, a NoSQL database.

Conclusion

Making these kinds of decisions is not simple. It requires a lot of experience to compare libraries and languages; you must at least know how they work, and it's even better if you have used them. Remember that you will not build this by yourself, you are part of a team. It's smart to sit with your team and discuss each individual's strengths and weaknesses in order to make the best tech decisions.

DO NOT mix the responsibilities of the microservices; instead, clearly identify each service's role. Add all the documentation needed to help the team, such as /* comments */, READMEs, and wikis.
A microservice architecture is not always the best approach, and it can be painful if you are not careful.


Creating a continuous integration server in a hackathon, and how to make SSH connections using ruby

Rails Rumble wrapped up recently, and with it the end of an exciting 48-hour experience where developers can challenge themselves, prove their programming skills, and create an awesome product from an idea.

This is the third hackathon I've participated in since I began working as a professional programmer, and I wanted to push myself and create my own version of a continuous integration server.

While creating a continuous integration server we went through a lot of challenges, from user experience design to interesting architectural decisions, but one of the most challenging and interesting situations I went through was running the build jobs from a repository in a virtualized environment. At the very beginning we realized that the safest way to run the tests for a given repository was to isolate the test environment from the web server environment because of the security risks, so we decided to set up the architecture in the following way:

 

CI architecture diagram

The idea was to connect to a remote (virtualized) server over the SSH protocol and run the script in a provisioned environment (ruby, rvm, rubygems, postgresql, sqlite, mysql, etc.). We spent some time researching how to connect via SSH using Ruby and found a library called Net::SSH, which lets you easily create SSH connections and execute commands. We did some tests and it worked, but unfortunately it was very hard to navigate through folders and request a bash environment just like a normal SSH connection from the UNIX terminal, so after a lot of researching, testing, and reverse engineering of many open source projects that use Net::SSH, we decided to create abstraction layers for each of its components (use cases).

 

Diagram of the CI module's abstraction layers

By giving a single responsibility to each class, we were able to easily build the programming interfaces on top of the CI module (see SOLID).

In the simplest scenario, you can connect to a server just by instantiating the objects from the top-level class of the CI module, as follows:
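
The original snippet isn't included here; a sketch of that usage, with class and option names as assumptions, might read:

```ruby
# Hypothetical top-level usage of the CI module.
environment = Ci::Environment.new(
  host: 'builds.example.com',
  user: 'ci',
  keys: ['~/.ssh/id_rsa']
)
environment.run('echo connected') { |output| puts output }
```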

Pretty easy, right? Let’s take a look inside the module:
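
A sketch of what the top-level class might look like (again, the names are assumptions):

```ruby
# Hypothetical Ci::Environment: it only gathers the connection parameters and
# delegates the actual SSH work to Ci::SSH.
module Ci
  class Environment
    attr_reader :host, :user, :keys

    def initialize(host:, user:, keys: [])
      @host = host
      @user = user
      @keys = keys
    end

    def run(command, &block)
      Ci::SSH.new(self).execute(command, &block)
    end
  end
end
```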

This class is only responsible for setting up the connection parameters that Ci::SSH will handle as a connection string, so we have encapsulated the Ci::SSH work at a lower level of the Ci namespace. You can actually use it outside the Ci::Environment class, but you have to customize it as seen in the def initialize method above. Now let's take a look at how Ci::SSH works.
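
A sketch of the wrapper, assuming a login shell is requested so the provisioned environment (rvm, etc.) loads as it would in a normal terminal:

```ruby
# Hypothetical Ci::SSH built on top of Net::SSH.
require 'net/ssh'

module Ci
  class SSH
    def initialize(environment)
      @environment = environment
    end

    def execute(command)
      Net::SSH.start(@environment.host, @environment.user, keys: @environment.keys) do |ssh|
        # `bash -lc` runs the command through a login shell.
        ssh.exec!(%(bash -lc "#{command}")) do |_channel, _stream, data|
          yield data if block_given?
        end
      end
    end
  end
end
```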

By defining these two classes, the usage of Net::SSH becomes pretty straightforward:
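
For example (still using the assumed names from the sketches above):

```ruby
environment.run('cd my_project && bundle install && bundle exec rake spec') do |chunk|
  print chunk # streamed as the remote build produces it
end
```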

By the end of the hackathon, all of this made communication possible between the continuous integration environment and the UI. We connected the shell output to a websocket using the Pusher service, so we could push notifications from the server to the user in real time; you can see it live by visiting the actual project from Rails Rumble, Simple CI.

Questions?

Let me know via comments on this post or via email antonio@tangosource.com.


What? Rails can’t hear you? Use Flash!

First, I'd like to mention that this post is not for beginners; I'm assuming you know about programming with Ruby on Rails (for this post I'm using Rails 3.1) and some JavaScript. If you don't know Rails or JS at all, you may need to do some research on the terms and instructions I mention here. With that said, let's start!

Have you ever needed to record and save a user's audio through the computer's microphone using Rails? If so, I bet you realized that there is almost no information about how to do this with Rails.

The Story

I was working on a project where the customer needed a feature that allowed users to record their voice using their computer's microphone. I thought, "this can't be that hard, there is HTML5 now," but I had no idea what I was talking about.

Researching: HTML 5 or Flash?

I started researching HTML5 and it didn't take me long to see that HTML5 is still a work in progress when it comes to recording live audio. The browser that currently has the best implementation of the HTML5 audio tag is Chrome, and even then the user needs to configure some of its flags for this to work properly, so I discarded HTML5 as an option.

I continued researching and soon I started reading in many posts that the best approach to accomplish what I was trying to do was to use Flash on the client side. I know, I don't like Flash either, but let's face it, most browsers support Adobe plugins and this is better than having the application work only in Chrome, so I decided to give it a shot and took the Flash & Rails path.

Once I decided to go with Flash, the next impediment was that I'm not a Flash programmer, but that was solved when I bumped into the jRecorder plugin (I want to thank Sajith Amma because he certainly saved me hours of learning Flash). jRecorder is a plugin that uses a Flash object to get access to the user's microphone and record the microphone input. The plugin also has a JavaScript API to initialize and interact with the Flash object.

Ok, enough background, let’s jump into the interesting part:

Coding

Download the jRecorder plugin and unzip the file. You'll see 7 files; the ones we need are jRecorder.js, jRecorder.swf, and jquery.min.js.

In your Rails application:
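
The original markup isn't reproduced here; below is a rough sketch of the view, assuming the plugin is initialized through its $.jRecorder constructor as in its documentation (the element id, file name, and include tags are assumptions, while 'host' and 'swf_path' are the options explained below):

```erb
<%# Hypothetical app/views/audios/new.html.erb %>
<%= javascript_include_tag 'jquery.min', 'jRecorder' %>

<%# The DOM element where the Flash object will be inserted %>
<div id="flashcontent"></div>

<script>
  // Only the two options discussed below are shown here.
  $.jRecorder({
    host: "<%= audio_path %>?filename=audio.wav",
    swf_path: "<%= asset_path('jRecorder.swf') %>"
  });
</script>
```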

Explaining the code

This is the DOM element where the Flash object will be inserted.

The JS code is actually the constructor of the JS object that will interact with the Flash object (we can say it is a sort of API). You can read the plugin's documentation here; however, there are two parts I would like to explain, since the example our good friend Sajith provides is in PHP and you may get confused:

In ‘host’ we define the route where we want to receive the recorded audio. In this case, for my application, <%= audio_path %> points to the upload_file action of a controller named audios, which we'll use to save the audio file. You can create a different route; just make sure it uses the POST method, because we are sending information to the server. filename=audio.wav is a parameter that contains ONLY the name (that means NOT the recorded audio). The result of "<%= audio_path %>?filename=audio.wav" would be something like '/audios/upload_file?filename=audio.wav'.

The other interesting option that you need to translate from the PHP example to Rails is ‘swf_path’. This option is where you set the path where the jRecorder.swf object is located, and all you have to do is use the Rails helper ‘asset_path’ and pass it the name of the Flash object.

At this point we are done with the client side.

Jumping into the backend

Go to the controller that responds to the route you specified in the ‘host’ parameter of the JS code; in my case it's the upload_file action in audios_controller.rb. If you have already read the jRecorder documentation, you've seen this:

(PHP code)

This code is very self-explanatory: all it does is receive the recorded audio, open a file, write the recorded audio, and close the file. Now, what is the equivalent code in Rails? Here it is:
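
A sketch of the Rails equivalent (controller and action names follow the route described above):

```ruby
# Hypothetical app/controllers/audios_controller.rb
class AudiosController < ApplicationController
  def upload_file
    # request.raw_post holds the raw WAV bytes posted by the Flash recorder;
    # 'wb' opens the file in binary mode to avoid encoding issues.
    File.open(params[:filename], 'wb') do |file|
      file.write(request.raw_post)
    end
    head :ok
  end
end
```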

The interesting line here is ‘request.raw_post’. That line contains the actual audio recorded by the user, sent from our view by the plugin; it is the equivalent of ‘file_get_contents(‘php://input’)’. Also note that we open the file with ‘b’, which is very important because the file we are reading/writing is a binary file (otherwise you'll face encoding issues).

That’s it! You should now have the file stored in the root path of your application!

Improve it

Depending on your application, there are many things you can improve, like converting the file from .wav to .mp3 or .aac using Zencoder, uploading the file to an S3 bucket, implementing background jobs to save the audio, etc.

Got questions?

Let me know in the comment section below.


Coding for a cause: Benefits and Pitfalls

Does free work sound like a bottomless pit of time expenditure, with no chance of compensation? Have you ever wondered how to do pro bono work in a way that not only improves the world but helps out your bottom line? Our team just released some pro bono work for Vittana, which I think was time well spent. Below you can find out how it came about, why it was worthwhile, and read some lessons learned to improve the odds of success for your next pro bono project.

Discovering the project

When I ran into my friend Kenji at a mixer a couple of weeks ago, it seemed like just another instance of our paths crossing, given that in one recent week we ran into each other 4 times at various tech events. Little did I know that the project I'm posting about now would come out of it. Kenji mentioned trying to raise awareness for Vittana as a means of making a difference, via a blogger challenge. For those not in the know, Vittana provides microloans to students in the developing world, and donors can choose how funds are distributed, as well as redistribute funds each time students pay back their loans.

Over drinks, Kenji and I bounced around a bunch of ideas and eventually we came together with a plan to create a blogger challenge with a leaderboard sorted by tweets and dollars lent through the challenge. It took a bit of back and forth, but eventually we were ready to rock.

Working with Another Team: How we did it.

In order to keep the project from spiraling out of control we:

Defined Scope: I've talked with people who have done pro bono work, and one of the worst outcomes is ever-expanding project scope, with no end in sight. To prevent this, we established some ground rules to keep it from getting out of hand. We agreed that by the time the contest started, we would hand over the code and give maintenance responsibilities to the Vittana team.

Established the Concept: Our first task was to establish the concept before any serious coding began. We wanted to create some background tasks that made Twitter API calls to search for the number of retweets associated with a given blog post. To narrow the search down, we suggested using hashtags. A possible side bonus is that we could create a trending topic and generate some buzz.

Minimized Reliance on the Vittana Team: To ensure we didn't have to take a deep dive into their code base, the Vittana team gave us JSON feeds containing all of the necessary fields (including donation totals) for each of the bloggers. Given the nature of the work, we were able to have a few developers quickly pile on, giving them a change of pace for a couple of days, and then allowing them to move on cleanly.

Why did we do this?

First off, I’m always happy to make my friends look good and improve the chances of a successful project. Given that the Vittana team was already slammed, I saw a great way to help out a friend. I was also able to do some good for a cause that I care about personally and professionally. TangoSource trains developers in our locale to increase their odds of professional success after college.

We also have a few developers that we've been training who were just about ready to write production-quality code. I believe that real work is more interesting and effective in training junior developers, and this seemed like a good opportunity to expose them to a real-world project.

Finally, we were presented with a chance to expand our business. Despite being relatively young, Vittana has a strong brand with a lot of awareness, which was a chance for great exposure. The possible value for the TangoSource brand seemed significant. Here’s a screenshot of the link we were given on the contest page:

Screenshot of the TangoSource link on the contest page

I also found out recently that the Vittana team is interested in discussing TangoSource doing paid work. This should not only provide us a means of growing the business, but also help us do good at the same time.

Lessons learned

1. Don't rely on others to test your code. To a certain extent, we assumed that the other team would test our code on production. We ended up having to do some last-minute patching that would have been more efficient if encountered earlier in the development cycle. Initial testing also exposed some places where the interaction design was broken on the admin side, which would have made further testing difficult, and we had to code a much more involved administration panel. This was an affirmation of test early, test often, with a realistic scenario.
2. Remember to nail down the environment before writing a line of code. In this case, we assumed that Vittana was using the same database we were testing against in our local environment. That mistake was a fast way to waste a couple of hours. Also, unit test your code regardless of the scope of the project. We wasted a fair amount of time on regression errors.
3. Whenever possible, try to limit scope in a way that's win-win. It makes sense for the Vittana team to take ownership of our code. The more complex our product, the more work needed on their side. Also, since this was a new type of initiative, we were able to execute a lean exercise for them without too much investment for either party.
4. Networking is valuable, but often in an intangible way. Deeper connections are more useful, but take work to find and cultivate. For me, this means meeting plenty of people to find those that I connect with. Afterwards, there's the enjoyable investment of staying in touch, which helps create opportunities.
5. Genuine altruism is a great business strategy. You never know what might result from doing good. After starting work, I ran into some of the Vittana leadership, who mentioned they might need some development help on future projects. This was confirmed in later communication, after they performed some code review.

Final results

With 90 hours of work total, I feel this was a worthwhile way to spend our time, even without counting the chances to grow TangoSource's business. From a purely utilitarian standpoint, this effort will probably pay off regardless. Subscribe to our blog to find out!

Stay tuned

During the course of the blogger challenge, check out Vittana's Twitter and the leaderboard we helped create:
http://www.vittana.org/make-a-difference
Be sure to spread the word to help fund students around the world.

Also, we’re planning on releasing a Rails gem that does the majority of the Twitter heavy lifting, so keep an eye on the TangoSource blog if you want help automating the tracking of tweets.


Finding quality outsourcers, the story of TangoSource

Does your outsourcing experience resemble the image above? A ton of strangers, randomly trying to find work? And after finally choosing someone from the crowd, having to deal with a lack of accountability, poor work product, and lack of passion. I know, I’ve been there, and I’ve learned many lessons. Hopefully my story will help you save time and anguish.

How it all started:

TangoSource's first product was, drumroll, a social network for tango dancers! After the bliss of the first couple of months, I realized that our market wasn't big enough, so I included ALL social dancers and created DanceHop, an events 2.0 website. I outsourced the design to Yilei Wang and the Ruby on Rails development to a local developer, who I'll call Ahab.

Ahab repeatedly apologized that it was taking him so long, which should have been a red flag. Fast forward: he quit after 3 months of work and gracefully declined equity. This forced me to look at Working with Rails to find a replacement. I settled on an Argentinian developer, Rodrigo. He started coding, and Ahab said I'd found "a diamond in the rough." Being smart, I'd engaged a local Seattle Rails guru for a couple of hours of code review each week to ensure I'd only keep developers that were improving.

A month in with Rodrigo, it became clear he could only code part-time, so I was back looking for help. This resulted in a string of devs, including an Indian developer, Dibya, who delivered but was a bit expensive; a cheaper Indian developer who didn't deliver; and two more part-time Ukrainian developers, Dmitry and Igor, who eventually also dropped out.

Finding a true partner:

A month after Rodrigo dropped off, I found my co-founder, Federico Ramallo, on Working With Rails. After a long and enjoyable interview with Federico on Skype, I asked if he was open to taking equity to offset some of his compensation. I’d already seen that his life values were compatible with mine, and our personalities jelled together well. He was open to equity, and after a month trial, with my Seattle Rails guru overseeing the process, we started a professional relationship that turned 2 years old as of September 2011.

Federico eventually moved to Colima, and we set up an office/co-working space there, which has been fantastic for recruiting. At that point, any thought of recruiting outside of our realm of influence was gone. We've trained up some great devs, found others, and the investment has been paying dividends in terms of output and company culture. To wrap up, I've included some guidelines I learned through my hiring experience, which I may go into in more detail in future posts.

Quick tips for finding great outsourced devs:

  1. Discuss values in the interview to see if you’re on compatible paths
  2. Offer a trial period of 1 month, with increasingly large test tasks
  3. Clearly define what meeting your expectations means
  4. Have an outside observer if you aren’t an expert in the field
  5. Run a daily SCRUM standup meeting, for clear lines of communication and personal connection with your team. Have accomplishments since the last meeting submitted in text before the voice meeting.
  6. Give a bit of the benefit of the doubt, but after more than a couple of days of poor performance or poor communication, it's probably time to move on to the next candidate.