
Vim: A sense of project

Vim (or Neovim) is an excellent text editor. It is lightweight, ubiquitous, and customizable at its core, but those are secondary traits; what makes Vim shine is its writing speed, driven by powerful, keyboard-centered, composable commands.

Despite its benefits, there is a list of things that Vim can’t do out of the box:

  • Advanced file management.
  • Git integration.
  • Advanced code completion.
  • Code linting.
  • Pairing sessions.
  • Global find and replace.

All of those are indispensable tools for everyday development. Are Vim users condemned to live without them? Not at all, and I’ll show you how to build a better Vim setup so you can access the tools you value most without losing the power of Vim.

Disclaimer: This post is not a total beginner’s guide to Vim. If you have previously configured Vim, then you are good to go.

 

Vim’s built-in customizability allows the community to create plugins for it, and VimAwesome is the hub for almost all publicly available plugins. Check it out; as its name suggests, it is awesome (you can navigate it with the Vim keys).

Adding plugins to Vim is straightforward:

  1. Download it.
  2. Add it to your config file (.vimrc or init.vim) by sourcing it.
  3. Source your config file again, or close and reopen Vim.
  4. Enjoy!

Even though doing this is not hard, it is tedious, so I recommend using a plugin manager like Plug or Vundle. I recommend Plug because it is the one I have used.

 

File management

netrw

Vim comes out of the box with a file explorer named netrw. To use it, enter :Ex or :Explore in normal mode. It will show a list of files in the current directory (you can navigate them with the usual keys). If you open a file and want to get back to the file explorer, use :Rex in normal mode.

 

[Screenshot: opening a file from netrw]

 

At first glance it seems that you can’t create new files or directories from it, but that’s only because netrw is an obscure tool to use. Run :help netrw or press F1 on the explorer screen to see the full list of commands.

For example, to create a new file in the current directory, use the % key; this will prompt you to name a new file.

 

[Screenshot: creating a new file in netrw with %]

 

You can sort the explorer entries, open them in a split or in another tab, hide dotfiles, show/hide files with custom filters, create bookmarks, execute arbitrary commands on marked files, and a few other exciting things.

NERDTree

If you happen to love the way trees present your project’s files, you may be better off with the NERDTree plugin, a powerful tool for file management.
NERDTree supports a lot of plugins that enhance its capabilities; I’ll show you a few of them later on.

 

[Screenshot: creating a new file in NERDTree]

 

You can create a shortcut to fast-toggle the NERDTree in your config file:
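A minimal sketch (the ctrl-n key choice is my own convention, not prescribed by NERDTree):

```vim
" Toggle the NERDTree panel with ctrl-n:
nnoremap <C-n> :NERDTreeToggle<CR>
```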

Also, you can see the plugin documentation pressing ? when NERDTree is open.

FZF

And what about searching for files by name or contents? Here comes FZF: its Vim plugin lets you use the FZF tool for fuzzy file searching, and on top of that, it lets you plug in any other searcher, like git grep, ripgrep, or the silver searcher (I use ripgrep).

 

[Screenshot: fuzzy file search with FZF]

 

You can create a shortcut the same way the well-known ctrlp finder does. To do so, map the FZF plugin to the ctrl-p shortcut (and configure FZF to use ripgrep as its default command):
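A sketch of that configuration, assuming fzf.vim is installed and ripgrep is available on your PATH:

```vim
" Make FZF list files with ripgrep, honoring .gitignore:
let $FZF_DEFAULT_COMMAND = 'rg --files --hidden --glob "!.git/*"'
" Bring up the fuzzy finder with the familiar ctrl-p:
nnoremap <C-p> :FZF<CR>
```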

You can use the :Rg command to search for file contents.

 

[Screenshot: searching file contents with :Rg]

 

You can map that command to ctrl-s:
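A sketch of that mapping (:Rg ships with fzf.vim; note that some terminals reserve ctrl-s for flow control, so you may need to free it first):

```vim
" Search file contents project-wide with ctrl-s:
nnoremap <C-s> :Rg<CR>
```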

Project-wide find and replace

Project-wide find and replace might be the hardest thing to achieve in Vim without feeling that you did something you shouldn’t have. There are a lot of ways of doing it; I’ll present the easiest one (though not the best).

By default, Vim comes with :grep and :vimgrep; both let you filter files by their content and save the results in the quickfix list. Once filtered, you can make replacements only in those files by using the :cfdo command. For example, to find and replace a function named read_file:

  1. First, enable Vim’s hidden option with :set hidden, so you can move between modified buffers without being forced to save each one first.
  2. This step is not required, but it makes the process faster by only touching the needed files (here, only Ruby files containing read_file): :grep -r read_file *.rb
  3. Finally, replace across all of them with :cfdo %s/read_file/read/gc | update
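Put together, the whole sequence from the steps above looks like this:

```vim
:set hidden
:grep -r read_file *.rb
:cfdo %s/read_file/read/gc | update
```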

 

[Screenshot: project-wide find and replace]

 

This method uses the Unix grep tool, so it won’t skip files that you may not want to touch, like node_modules or files ignored by your source control system. I’ll leave a couple of interesting posts at the end of this one so you can read more about this find-and-replace stuff.

Project workspace

You close Vim and call it a day; next morning, when you open the editor, you have probably forgotten which files you were working on and how you arranged them. Well, you don’t have to go through that hassle anymore: Vim has sessions that let you save your workspace just as you left it.

In normal mode, enter :mksession to create a new session file in your working directory (called Session.vim by default). If you want to open Vim with that specific session, start Vim with the -S flag and you are good to go.

 

[Screenshot: restoring a Vim session]

 

The downside of using sessions is that you have to re-save them every time you change your workspace layout or open new files. To overcome this issue, you can use Obsession to manage all session-related stuff for you.

Remember always to start Vim with vim -S to use your Session.vim file.

Code Completion

By default, Vim can complete based on dictionaries, opened buffers, and tags files; this can fall short compared to other text editors.

Also, this is a controversial subject among Vim users, and there is no unanimous verdict on what a completion tool should or shouldn’t do. I’ll list a couple of plugins for code completion:

  • COC has support for a lot of languages, function signature completion, and a handful of extensions.
  • YouCompleteMe has completion engines for many different languages.
  • Supertab lets you trigger completion with the tab key.

Code linters

There are a bunch of useful plugins for code linting. Whichever plugin you choose will work (I use Neomake).

Code navigation

If you want to jump directly to a function declaration, Vim has built-in support for it. Vim takes advantage of the tags file, but you need a tool to generate that file, and the downside is that you have to regenerate it every time you make a significant change to your codebase. To avoid that tedious process, use Gutentags, which handles all the tags-related tasks for you.

To go to a code declaration, position the cursor on the specific method or class name and use ctrl + ], if there’s a tags file with that definition, it will move you directly to its declaration.

 

[Screenshot: jumping to a definition with tags]

 

Code snippets

I don’t use snippets, but if you do, there are a lot of plugins for it. The best known and most used is the SnipMate & UltiSnips Snippets plugin.

Also, for commenting code, you can use the NERD Commenter plugin.

Pairing sessions

Vim by itself doesn’t have remote pairing capabilities. Still, they can be achieved with tmate, which creates fast SSH sessions with read-only capabilities (it uses your tmux configuration if you already have one).

GIT integration

Since you can execute arbitrary commands from Vim, you can talk to Git directly, but that is not much fun. There are a couple of plugins that enhance the Git capabilities inside Vim:

  • NERDTree Git Plugin adds visual indicators to your NERDTree, so you know when something is going on with your files.
  • GitGutter uses the sign column to highlight new, edited, and deleted lines in your open buffers, show differences, etc.
  • Fugitive lets you call any Git command inside Vim, with fantastic diff, log, and blame capabilities.

Spelling and Dictionary

Vim has built-in capabilities for spell checking, highlighting, and correction. To enable them, enter :set spell spelllang=en_us in normal mode. Use [s and ]s to navigate between spelling errors and z= to show the correction list.

 

[Screenshot: Vim spell checking]

 

Vim also has built-in dictionary completion. To use it, your system needs a words file; on Linux, there is usually one at /usr/share/dict/words (you can download one if needed). In normal mode, enter :set dictionary+=/usr/share/dict/words to append it to the existing dictionaries.

 

[Screenshot: Vim dictionary completion]

 

Use ctrl + x followed by ctrl + k to open the completion menu, and use ctrl + p and ctrl + n to navigate it.

Large file editing

Vim tends to choke when you open a large file, due to line numbering, syntax highlighting, and all the plugins working on that specific file. To edit such a file, you can disable the tools mentioned above yourself, or use the LargeFile plugin to do it for you.

 

Conclusion

All of this seems like a lot, and you may ask, why should I even care to do all of this if Visual Studio Code, or “Insert my favorite code editor” does that and more right out of the box? 

Well, the thing is that Vim allows you to tailor the text editor as you please. Without significant shortcomings, Vim can be faster than any other code editor you have ever used, ready to be used on almost any Unix system that you come across (and Windows too).

For me, it is the experience: that rewarding feeling you get when you accomplish a tedious task with a couple of commands. Tools make the work experience; I’m sure that woodworkers don’t like a dull saw, nor painters cheap brushes.

 

Useful blog posts and resources:

 

 

 

 


Memoization: Faster Ruby app

The performance impact of your code in a certain programming language and/or platform is often overlooked, which might make you prone to blame those tools instead of analyzing what your implementation did to make your application slower. When appropriately used, memoization can help you improve your application’s speed with an insignificant impact on the readability or flexibility of your code.

What is Memoization?

Memoization is a technique where you can cache a process result in order to use it later without the need to run that process again. This has the potential to save a massive amount of computing/networking power, thus improving the general performance of your application.

How does it work in Ruby?

You can memoize a value in Ruby using instance variables because they live long after a method has finished.

And here comes the magic of Ruby: the ||= (“or equals”) operator. Since instance variables won’t raise an exception when accessed before being declared, you can use the “or equals” operator:
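A minimal sketch of the operator in action (the variable and value are illustrative):

```ruby
# Assign the right-hand side only when @value is nil or false:
@value ||= "expensive result"
@value ||= "ignored" # @value is already set, so nothing happens here
```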

Here we are saying to Ruby:

“If you don’t have a truthy value for @value variable, then assign it the value at the right”

With this knowledge you can extrapolate it to an instance method:
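A sketch of such an instance method (the class and value are illustrative):

```ruby
class User
  # The assignment runs on the first call; later calls return the cached value.
  def name
    @name ||= "Jane"
  end
end

user = User.new
user.name # => "Jane"
```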

As long as the user instance exists, that value will stay in memory.

This is a simple example, but what would happen if instead of a string value, that assignment made a database query? Or an API call? A heavy object creation?

Here’s an example of a dummy user model with a dummy database connection object:
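A sketch of that idea, with a fake database object standing in for a real connection (all names here are illustrative, not from the original post):

```ruby
class FakeDatabase
  attr_reader :calls

  def initialize
    @calls = 0
  end

  # Pretends to be a slow query or API call; counts how often it runs.
  def fetch_settings
    @calls += 1
    { theme: "dark" }
  end
end

class User
  def initialize(db)
    @db = db
  end

  # The database is hit only once, no matter how often settings is called.
  def settings
    @settings ||= @db.fetch_settings
  end
end

db = FakeDatabase.new
user = User.new(db)
user.settings
user.settings
db.calls # => 1
```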

How to update cached values

There’s no “clean” way of updating a cached value; the only way is to directly overwrite the instance variable, like so:
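A small sketch (class and names are illustrative):

```ruby
class Greeter
  def greeting
    @greeting ||= "hello"
  end

  # There is no built-in cache invalidation; just overwrite the variable.
  def reset_greeting!
    @greeting = nil
  end
end

g = Greeter.new
g.greeting        # cached from now on
g.reset_greeting! # next call recomputes the value
```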

Keep in mind that memoized values’ ideal use case is for never-changing values.

Formatting and styling

When you need to memoize a certain not-so-straightforward value, you tend to write code like this:
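A sketch of that style, with a hypothetical expensive lookup standing in for the real work:

```ruby
def value
  # Early return when the cached value is already there:
  return @value if @value

  raw = "41" # hypothetical expensive lookup
  @value = raw.to_i + 1
end
```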

Here you need to pay special attention to what each line of code does: 

“Did you finish the routine early because a certain value is not present?”

“Is this just an assignment method?”

You can certainly rewrite the code so it removes that unwanted cognitive load from you:
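A sketch of that rewrite, using the same hypothetical lookup:

```ruby
def value
  @value ||= calculate_value
end

# A rather verbose method that encapsulates an obvious task:
def calculate_value
  raw = "41" # hypothetical expensive lookup
  raw.to_i + 1
end
```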

This will work; it does what it says it does, but now you have created a rather verbose method to encapsulate an obvious task.

Finally, you can rewrite it, this time taking advantage of a Ruby begin…end block:
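A sketch of the final form, again with a hypothetical lookup:

```ruby
def value
  @value ||= begin
    raw = "41" # hypothetical expensive lookup
    raw.to_i + 1
  end
end
```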

This way, the code clearly expresses its intention, and you didn’t need a special method for it; you just used a block that does what the previous calculate_value method did. Now, within the same method, you read:

“Oh, this is a memoized value”
“…And here is the source of that value”

Conclusion

As you can see, memoization can improve your application’s performance, and the greatest thing about it is that it comes with little to no downside to your code’s readability.

Remember, to use it properly, you need to find the calculated values and/or reused objects that your application processes over and over while expecting the same result, then memoize them and enjoy your milliseconds of saved time!

 

 


A thoughtful insight into why Redis

 

It’s common for us developers or technology enthusiasts to encounter technology that we can use right out of the box without really knowing what that thing actually does. It was not long ago when I was first introduced to Redis, this magical thing that solves all sorts of problems. 

I was working on a Rails application when I first heard about Redis, but I wasn’t really trying to work with Redis per se; I was actually trying to make Sidekiq work: a background processor for Ruby that needs Redis in order to run. As always, I expected things to work out of the box.

If you are not familiar with background processing in web applications, it is basically the deferral of work whose result is not needed immediately, because the end user can live without knowing the result right away: think of resizing their profile picture or, a more commonly known example, uploading a video to YouTube, which tells you to come back later for the result.


The basics

Using the same example above, let’s think about how we can make this work in case we wanted to build our own solution for background processes. The best and most simple solution would be:

1. During a request, identify what information should be delivered ipso facto and what can be processed later.
2. Store information about what can be processed later, of course, the obvious answer here is to use our database, whose only job is to do so.
3. Run a different process than our main application (the one receiving the request); it will be monitoring the database in case something needs to be processed.
4. Process whatever needs to be processed.
5. Profit.

In case you are wondering, as I was at some point: what is this process that keeps checking for background processes in the database? It will most likely be another instance of your main application, configured to run without a web server listening for requests and instead with a routine that checks for new background processes to run. This way the worker (the main application, minus the fancy web server) has access to all the goodies you have already written to make your life easier: models, serializers, etc. Or, in the case of Rails developers, the magic flows right into our worker.
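To make the loop concrete, here is a toy sketch in Ruby, using an in-memory queue as a stand-in for the database table (all names are illustrative):

```ruby
# Stand-in for the background-processes table; a real worker would poll the DB:
jobs = Queue.new
jobs << { type: :resize_image, id: 42 } # enqueued during a web request

processed = []
# The worker's routine: grab pending work and run it.
until jobs.empty?
  job = jobs.pop
  processed << "ran #{job[:type]} for record #{job[:id]}"
end
```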


The problem

Ok, so it seems like we have the solution here, so why bother to even think about an alternative for any of the 5 easy steps to build our worker? Time will be wise: when you reach your millionth user, you will notice that your database is starting to have problems reading and writing all the information your inconsiderate users want your application to store and retrieve for them. On top of that, the worker is constantly hitting the database in hope of finding more background processes to run; after all, the database is the only source of truth for knowing the next background process to be run.

Let’s say that at some point, retrieving a row from our background processes table takes x amount of time; a rough explanation of how our database manager is accessing the data will be the following:

1. It listens for a query request through the configured port.
2. It processes the query to know what to fetch from disk (a more complete explanation of this step can be found here).
3. It asks the OS to read certain sectors on the disk.
4. It reads the sectors from disk.
5. The OS returns the data from the disk.
6. It stores the rows in memory.
7. It sends the information back to the requester.

The key steps here are 3-6. If the database manager wants to access data written on disk, it has to ask the Operating System for it, and it depends on the availability of the disk’s reading capabilities, which can vary depending on the hardware used in our server. For the sake of simplicity, let’s say these 4 steps take 4/7 of the time necessary to return the requested information. So if our request takes 1s to complete, almost 60% of that time was spent retrieving information from disk into memory. Wouldn’t it be better if that information were already in memory from the beginning? Well, the moment you have been waiting for is here. I present to you: Redis.


The solution

Redis is, from the front page of its site:


“an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker”.

The secret sauce here is “in-memory”. Redis always stores data in memory (RAM), so the response time of any petition can be lightning fast, and it can be accessed in the same fashion as a normal database, such as MySQL.

Redis can also back up what it stores in memory to disk, so if you need to restart the machine running the Redis service, you won’t lose anything, as long as Redis finishes writing that backup. After the restart, or in the case of a failure, when the Redis service starts again, it repopulates the in-memory database from the backup on disk.


How it works

You won’t believe the simplicity of Redis. It is just, wait for it… a dictionary (a map, if you will), so the concept is really easy to grasp. You basically set keys to certain values, and a value can be anything from a simple string to a member of an ordered set. Redis has a really impressive catalog of structures that it can manage; a complete list of these structures can be found here.

Every structure has its own set of commands, for example, to set and get a simple string from Redis, you will do something like:
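For example, in a redis-cli session (the key and value are made up for illustration):

```
127.0.0.1:6379> SET greeting "hello"
OK
127.0.0.1:6379> GET greeting
"hello"
```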

 

 

Usage

Just as in our first example, we can use Redis to store what our worker needs to start running background processes. In a sense:

1. Our main application writes into Redis a descriptor of the background process.
2. Our worker reads the descriptor and runs the background process.

Another interesting use of Redis is when a third-party service we use works with authentication tokens. Basically, you make an auth request to this third-party service, and it returns a refresh token and an authentication token that works for a certain amount of time, let’s say 1 hour. So, where should we store these tokens inside our app? You guessed it: Redis, so they can be accessed later for other requests, or even used by our worker.
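Redis can even expire such keys by itself: SETEX stores a value with a time-to-live in seconds, which fits the 1-hour token nicely (the key name and value here are illustrative):

```
127.0.0.1:6379> SETEX auth_token 3600 "abc123"
OK
127.0.0.1:6379> TTL auth_token
(integer) 3600
```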


Conclusion

There are limitless situations that fall within Redis’s use cases. As always in technology, Redis is not the magic solution for everything; experience will tell you whether Redis is a fit for what you are doing. And of course, you can always create your own solution on top of Redis, though I bet there is at least one project out there that will fit you just right. But if you are really passionate about deeply understanding how things work, go for it; in the end, practice makes the master.

I hope you find this information useful to make an informed decision in the future, and feel free to comment here what your results were when using Redis, or if you have a cool alternative to it.

 


6 reasons why you should stop using Java and use Kotlin instead

 

As an Android developer who started working with Android 4 years ago, I had to learn Java in order to create native applications. For the first two years, I learned a lot about Java and I started to feel that I was getting good at it. During this period, I heard you could use Kotlin to create Android apps but I always thought “There’s no way Google would deprecate Java, it’s their main language.”

A year passed, and first-class support for Kotlin was announced at Google I/O 2017. In 2018 Kotlin was voted as the second most loved programming language (StackOverflow).

 

Most Loved, Dreaded, and Wanted Languages

StackOverflow. (2018). Developer survey 2018. [Image]. Retrieved from https://insights.stackoverflow.com/survey/2018

 

Additionally, according to Pusher in “The state of Kotlin 2018”, after the official announcement, the Android community has been migrating from Java to Kotlin rapidly.

 

Kotlin’s growth
Pusher. (2018). The state of Kotlin 2018. [Graphic]. Retrieved from https://pusher.com/state-of-kotlin

 

A few months after Google I/O 2017, I was still using Java because I thought Google would not deprecate it, and you could still use it to make apps, so I thought Kotlin was in its “baby steps”.

Even when I kept thinking I should wait a little bit more, I did not want to lose the opportunity to learn a new programming language, so I decided to experiment a bit. I started reading articles and blog posts, and following tutorials. After a while, I started to get bored because I was only doing tutorials or easy examples of Kotlin. Following tutorials is a good start, but they only show you the basics of a certain topic, and I really wanted to apply all the new things I had been learning to a real project. My prayers were heard and I was assigned to a new project that I had to start from scratch, so I took the risk of using a new language in a real project and learning on the way.

To my surprise, learning Kotlin was unexpectedly easy thanks to my Java background. If you’re asking yourself, “Should I use Kotlin instead of Java?” Well, yes, you should! But, why?

 

Here are some advantages of Kotlin over Java to consider:

 

1. Say goodbye to NullPointerException

The NullPointerException is the most common exception for a Java developer, and Kotlin aims to eliminate it by using nullable types. A nullable type is one whose value can be null or not, and it is marked with a question mark (?).

If you try to access a property of a nullable String, e.g., the length of b, the Kotlin compiler won’t let your code compile because it detects a potential NullPointerException (NPE). To get the length of a nullable string, you have to tell the compiler that it may access that property only when the value is different from null, by adding the question mark.
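A small sketch of the idea (the variable name b follows the example above):

```kotlin
fun main() {
    val b: String? = null // nullable: may hold null

    // println(b.length)  // does not compile: b can be null here
    println(b?.length)    // safe call: prints "null" instead of crashing
}
```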

The main benefit of using nullable types is that they give us the chance to avoid an unexpected NPE before our app crashes in runtime.

Nullable types provide this benefit because the Kotlin compiler detects a possible NPE, and Android Studio won’t let us compile our code until we fix it, ensuring that our application is null-safe. However, keep in mind that Kotlin does not make NPEs impossible; rather, it forces you to prevent the exception before your app ever runs.

 

2. Extension functions

An extension function is a way to add new functionality to an existing class without inheriting from it. For example, if you want to remove the last character of a String:
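A sketch of such an extension (the function name is my own):

```kotlin
// Adds dropLastChar() to String without inheriting from it:
fun String.dropLastChar(): String =
    if (isEmpty()) this else substring(0, length - 1)

fun main() {
    println("Kotlin!".dropLastChar()) // prints "Kotlin"
}
```

In practice the standard library already ships dropLast(1); the point here is the mechanism.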


3. Reduce boilerplate 

Boilerplate is the repetitive code that you see in almost every part of your project. A classic example of repetitive code is a POJO class in Java.

You can create the same POJO class in Kotlin by defining a data class. 
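A sketch of the contrast (the class and its fields are illustrative): the single line below replaces the fields, constructor, getters/setters, equals(), hashCode(), and toString() that the equivalent Java POJO would need.

```kotlin
data class User(val name: String, val age: Int)

fun main() {
    val user = User("Jane", 30)
    println(user) // prints "User(name=Jane, age=30)"
}
```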


4. Data class

The example above uses a data class; we can use a data class when the main purpose of the class is to hold data. Likewise, it saves us from adding that unnecessary code to every class that needs it.

 

5. Interoperability

Kotlin can work with Java in the same project without any trouble because both languages compile to the same JVM bytecode. If you are migrating your app, or you want to add a feature built with Kotlin to your existing Java application, you can do it.

 

6. Android Studio IDE

Android Studio has grown a lot, and its Kotlin support is excellent. It also has a feature that converts Java code into Kotlin by just doing a simple copy/paste, or with a few clicks. This is a useful advantage if you are considering migrating your Java app, but be careful about using it to convert your full codebase at once; I prefer to use this feature only on little pieces of code.

 

It’s not all a bed of roses.

Kotlin is not a perfect language. In fact, it has cons that you may be interested in knowing before choosing it as your main language to develop Android applications.


Learning curve if you are not a Java developer

I mentioned that if you are a Java developer, the learning curve is gentle, but it may be steep if you are learning on your own, without an expert Kotlin developer helping you or without any Java experience.

Compilation speed
When we talk about compilation time, Kotlin seems to be slower than Java in some cases, particularly when performing clean builds.

Find a Kotlin expert
There are a lot of Kotlin developers available in the market, but if you want to find an experienced mentor who can help you improve your skills, it may be a difficult task.

Few learning resources
As I mentioned before, the Kotlin community has been growing fast, but even so, finding a good learning resource can be difficult compared with Java.

If you are interested in learning Kotlin but you don’t know how to start or where to start, you can visit these links to learn more about it:

  • The official Kotlin page hosts the official documentation. It is easy to read and has a lot of good examples.
  • Android code labs are a bunch of awesome tutorials made by Google developers.
  • Android Weekly is a free newsletter that helps you to stay cutting-edge with your Android Development skills. Every week you will receive an email with several topics, tutorials, and posts about Android development.

 

Conclusion

Everything about Kotlin has been fascinating. I have encountered zero issues with this language or any implementation that I want to add to my projects. I have been able to solve every challenge I have faced by using Kotlin, without the necessity of adding any Java to my code. 

I do recommend you look for guides when starting to use Kotlin, especially if you’re not a Java developer. Getting help can fast-track your learning: receiving tips, getting feedback, or being told that the solution you implemented is not the best. Every padawan needs a Jedi to guide them to the light side of the Force.

If you are still using Java, my suggestion is to try Kotlin. If you have been following Google I/O since 2017, you have noticed that all the examples of new implementations and tools use Kotlin. If you are a beginner looking for a language to learn, choose Kotlin and do not worry; there is a lot of good information about how to start.

 

https://pusher.com/state-of-kotlin#adoption
https://kotlinlang.org/docs/reference/android-overview.html
https://kotlinlang.org/docs/reference/data-classes.html
https://kotlinlang.org/docs/reference/extensions.html
https://insights.stackoverflow.com/survey/2018
https://www.netguru.com/blog/kotlin-java-which-one-you-should-choose-for-your-next-android-app


How to conquer legacy code and not die trying

As a software engineer, I know how frustrating it can be to work with legacy code, especially when your client has no idea of the level of technical debt you are inheriting and wants you to deliver bug fixes and new features as soon as possible. If you’re as passionate about software development as I am, you’re supposed to enjoy it, not hate it. That’s why I’m writing this blog post: to share my experience and some key advice about how to deal with it.

The most common issues of working with legacy code are:

  • Having no tests at all, or no useful tests.
  • Outdated dependencies.
  • Poor software architecture.
  • Technical debt.
  • Lack of documentation.

Here are some recommendations on how to deal with it and not die in the attempt.

Risk Assessment

After performing this assessment, you’ll be aware of all the risks you’re taking (or inheriting). In the end, you’ll have to decide how comfortable you are with the given risks. Therefore it’s super important to know the current state of the project and to be aware of what to expect when it’s time to get your hands on the code.

The goal of this assessment is to learn as much as possible of the current codebase. My recommendation is to focus on the following:

  • Documentation: Unfortunately, most of the time the only documentation that you might find in an existing project is the README file, and even worse, this file is usually not up to date. Look for architecture diagrams, wikis, or any other documentation that can help you to understand the codebase at a glance.
  • Test coverage: Speaking of documentation, tests are considered the best code documentation. You should check for the current test coverage, but more importantly, check the quality of the tests. Sometimes tests check against specific text instead of testing the business logic that drives that text to be displayed. If there are good quality tests, you should be able to have a better sense of the business logic, and moreover, the current architecture.
  • Dependencies: Having a ton of dependencies is not a good sign. I always try to avoid adding a new dependency unless it’s strictly necessary, or the benefits of said dependency exceed the cost of making it happen. Check how outdated dependencies are and how difficult it would be to do upgrades. Pay extra attention when it comes to deprecated versions and you need to do major upgrades.
  • Deployment process: A new feature, bug fix, or improvement is not considered done until it reaches the production environment. You need to understand what the deployment process is since you have to take this into account while giving estimations.
  • Backlog: After getting a little bit familiar with the codebase, you need to know what’s about to come. You might be asked to add a particular feature that might not be that easy to add given the current architecture or the version of the dependencies implemented. The value of knowing the backlog beforehand is to raise any warning flags in the early stages so that you can plan.

After going through each of the above points, you should be able to communicate any risks to your client. Set clear expectations and decide whether to move forward or not based on your discoveries.

Buy insurance

If the test coverage isn’t the best, you should ask to work on adding tests before adding new features. You need to be confident enough that you’re not adding new bugs while changing the codebase, and the only way to be sure is to have a proper test suite. Good test coverage is your insurance as a developer.

Refactor on the go

I know that you might be tempted to do a major refactor as soon as you start touching the codebase; however, based on my experience, that is not always the best option. A big refactor can take forever; it’s like a chain reaction. You start with a few files or lines of code, which then scales so quickly that you’re suddenly in a situation where you’ve already refactored hundreds of files with no end in sight.

A better option is to do small refactors on the go. Refactor the code you’re touching based on the features you’re working on.

The approach that I like to follow in this case is one of the SOLID principles, the Open-Closed principle, which says that a class should be open for extension but closed for modification. If you can’t extend an existing class or file, then create a new one following this principle, and use it only where it’s required. Avoid changing existing implementations, since those changes can snowball quickly and you might get lost in the refactoring instead of focusing on delivering the feature.
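As a small illustration of extending rather than modifying (the class names here are invented for the example, not taken from any real codebase), a legacy class can be left untouched while new behavior lives in a subclass:

```ruby
# Legacy class we'd rather not touch: battle-tested, no coverage, risky to edit.
class LegacyReportGenerator
  def generate(data)
    data.map { |row| row.join(',') }.join("\n")
  end
end

# Open-Closed in practice: extend via a new class instead of
# modifying the original. Only the new feature needs the header.
class CsvReportWithHeader < LegacyReportGenerator
  def initialize(header)
    @header = header
  end

  def generate(data)
    ([@header.join(',')] + [super]).join("\n")
  end
end
```

The new feature uses `CsvReportWithHeader`, while every existing caller keeps using `LegacyReportGenerator` unchanged, so the blast radius of the change stays small.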

Conclusion

Dealing with legacy code shouldn’t be that painful. It depends on your approach to working with it.

Here are the things that you should do before starting the feature development:

  • Read the existing codebase.
  • Analyze the current backlog.
  • Add tests until the point you feel confident enough.
  • Set clear expectations with your client.


Once you’ve already started the development process, these are the things you should bear in mind:

  • Avoid major refactors; instead, do refactor on the go.
  • Add tests to any single feature or bug-fix you work on.

Let me know your thoughts in the comments section. If you have any other suggestions about how to deal with legacy code, feel free to share them with us.

 


Design thinking for Developers (Workshop, part 2 of 2)

The following video is the core content of the second part of a Design Thinking workshop given by our friend Michael Gharabiklou, Founder/Managing Partner at Quor Studio. You can watch part one here.

 


Design thinking for Developers (Workshop, part 1 of 2)

On his visit to TangoSource’s headquarters in México, our friend Michael Gharabiklou, Founder/Managing Partner at Quor Studio, shared his knowledge about Design Thinking with us.

The following video is the core content of a 3-hour workshop and the first part of a 2-video series. It includes the Design Thinking process and the importance of empathy in this methodology, among other topics.

 

 


The Web Design Process in Agile Methodologies

As TangoSource’s role description says, web designers are architects of the web. They focus on the look and feel of the website; and so, they should be visual arts experts, skilled in color scheming, graphic design, UX, and information flow. However, this is just a glimpse of the whole body of knowledge that a web designer should master in order to create good and effective web solutions.

The web design process is not a linear path, but a convergence of multiple disciplines, techniques, and methodologies, which have to adapt to every project’s needs while keeping up with the new technologies coming up every day. It involves and integrates several disciplines, such as Information Architecture, SEO, User Experience Design, User Interface Design, Graphic Design, Product Design, User Centered Design, and Front-End Development, to mention some of them.

However, many of the sources I’ve found on the internet address web design merely as development’s initial step, before any code is written. This development approach is called Waterfall, in which every step leads to the next one, and every part of the team works separately and sequentially.

The Waterfall method starts with a project plan. Before any part of web development can begin, designers must have a clear vision of the final product in mind. The planning stage is upfront and quite extensive, but one of Waterfall’s benefits is that, most of the time, web developers can accurately estimate the timetable and budget for a project.

This workflow approach was the most popular one until a couple of decades ago. And it certainly has its benefits, but one of its disadvantages is its lack of flexibility, as it doesn’t allow open communication with the client and the rest of the team throughout the process. In many cases, if any part of the initial design changes, the team has to rethink the whole project.

Waterfall Model


During my recent experience working at TangoSource, I’ve had the opportunity to work with the Scrum framework, one of the Agile web development approaches, which works through iterative, incremental cycles. It allows designers, developers, project managers, and stakeholders to actively participate in the whole process, encouraging open communication throughout. Something to always keep in mind is that getting feedback from different perspectives is really valuable, as you gain team and customer insights for faster testing and ideation.

In other words, in an Agile environment, the phases run parallel instead of following each other, as in the Waterfall methodology. This way, design, development, and testing all occur at the same time, as the work is divided into smaller, independent increments delivered in Sprints. These increments can be individually released, according to the continuously evolving requirements and solutions defined by the project’s needs. This also speeds up the feedback process and brings mistakes to light, making bugs easier to identify and reducing wasted time and money. Sounds idyllic, right?

Agile Model


These principles constitute the basis of the Lean Startup mindset, known as the 3-step process: the Build, Measure, Learn feedback loop. This continuous informational loop gives businesses the opportunity to create, launch, learn, and iterate on new and old products, and it works as follows:

Build – Create a Minimum Viable Product (MVP) based on a set of assumptions.

Measure – Launch MVP and measure your hypothesis based on your assumptions.

Learn – Accept or reject your initial hypothesis and iterate on the MVP.

Build Measure Learn Loop


The essence of the Lean Startup methodology consists of helping reduce the time and risks associated with launching a new product to the market, by making efficient use of resources and continuously testing said product for market risk. This is done by validating users’ interest before building the solution, and testing for initial user engagement while continuing to improve the solution to an identified problem.

As we have seen, the Lean Startup mindset and Agile methodology share some principles and adapt them to their own techniques. This doesn’t mean you have to choose one over the other; rather, you can take from each the tools that best fit you, since each has its own way of adding value to your process.

In his article A Designer’s Introduction to “Agile” Methodology, Joel, who performs the creative work in a consultancy, gives us an insight into this, speaking from his own experience:

 

As a designer, making the transition from freelancer to working within a big company with multiple teams on Agile projects, can be a big leap. In my experience, it is a useful framework to work within and its principles can even be used in your own personal projects.

– Joel

 

If you’ve heard the term ‘Agile Design’, it is simply the application of some agile development principles to the design process. One of this flexible framework’s advantages is that it can be easily adapted to different workflows, teams, and project needs.

On the other hand, the incremental cycles approach can present some difficulties on the design end. One of the risks a web designer might face while working with iterative processes is finding visual style inconsistencies between the early stages of the project and the final outcome, especially if there wasn’t a document defining the style guidelines from the beginning.

For this matter, defining a style guide is one of the best ways to ensure visual consistency. A style guide is the design equivalent of development documentation, in which every project guideline is promptly defined. This keeps every effort focused in the right direction and avoids style inconsistencies throughout the process, or whenever future development or third-party work intervenes.

 

“The Agile approach of Continuous Integration is reflected in the use of style guides, because creating content first and then using it to build a flexible framework reflects the iterative and cyclical nature of Agile development. The iterative format comes from the ability to experiment, test and tweak the way the coded elements are used in a layout.”

– Luke Clum

 

If you want to dive a little deeper into this, Tom Greever has given us nine valuable Principles Of Design Implementation, which adapt perfectly to the essence of iterative design, and which every design team can use as a basis for their own style documentation.

These days, you can find several different tools out there to help you ensure not only visual consistency but also responsiveness and cross-browser functionalities. A great resource to use is the website styleguides.io, where you can find articles, books, and some other tools to help you as a starting point in your extensive quest.

Being aware of design principles is crucial for effective digital communication, and therefore, a successful product. But it is also important to understand that visuals don’t constitute the whole design, just a part of it. There always has to be a solid communicative premise on which design decisions can rely.

As Matt Meazey said in his article The Web Design Process in 7 Simple Steps:

 

Web design — and really, design in general — is all about finding the right balance between form and function. You need to use the right fonts, colors, and design motifs. But the way people navigate and experience your site is just as important.

– Matt Meazey

 

Certainly, there’s more than one way to do things, but there is one main focus which has to remain as the center of our thinking process: creating real solutions for digital communication. Innovation is a plus. Always remember that as every project and team is different, you’re best off choosing the methods that work for you, and adapting them on the road.


Why “Geek Out” is a key part of the TangoSource company culture

“Geek out” might seem like a strange and vague company value, so why write an entire blog post about that value alone? I recently met with a consultant who came down to our main development office in Colima a few months ago, and we were discussing her experience with us. Compared to other service firms in Mexico, some of whom have their employees drink the Kool-Aid, she felt that a key advantage for us is one particular cultural element: Geek Out.

When I wrote our company culture document a few years ago, I put a lot of myself into it, based on my reality at the time and my aspirations for what a nearshore product development shop could be. After mulling it over for a few days, I had my list, and I realized it didn’t paint the complete picture of who we are and how we work.  I added Geek Out as the last value on the list (at the time) after thinking about how much our team likes gaming, how we’ve all got different personal interests, and how we support each other’s personal expression.

 

 

Geeking out means celebrating all of our respective individuality. In my experience, gaming helps people learn how to communicate and strategize flexibly and effectively. As you learn the rules of new games, you have to adapt. Each game exercises a variety of hard and soft skills, and tabletop games require that you play both the rules and the other players. The same is true for music, dance, cooking, sports, and pretty much anything that people are passionate about.

Product and software development could be considered one of the most challenging and rewarding games in the world.  It takes years to develop the hard skills which can be more easily observed: Is your code clear, maintainable, scalable, precise, and well architected? It also takes a lot of soft skills to be more than just an efficient code monkey: empathy, proactive communication, knowing what’s going on in tech, and personal ownership of the product and customer success. To be a high-level product person means putting your entire self and experiences into your work.

So if you work with us, or want to, please understand the intent of Geek Out. The best people bring their all into what they do. Rather than stifle individuality, I feel we should help it grow and mature, for all of us to enjoy and learn from.

 


Rails basic API authentication with Sorcery and JWT

APIs have become more and more popular due to the increasing demand for mobile and single-page apps, and with it the need for sessionless server-side applications. Token-based approaches are the most popular nowadays due to their relatively easy implementation. Here you have a great example that uses JWT to secure a microservice-based application.

JWT (pronounced “jot”) stands for JSON Web Token and, as the name states, it is a web token structured in JSON format. JWTs are composed of 3 parts: the header, the payload, and the signature. In short, the header contains the type of token and the algorithm used for cryptographic signing; the payload (or claims) contains the information you want to send securely to the server; and the signature is computed with a secret key known only to the server and is used to verify that the payload hasn’t been tampered with. It is not the primary objective of this post to explain JWT in detail, so if you want to know more about it you can do so here.
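To make that structure concrete, here is a tiny stdlib-only sketch (the values are made up for illustration) of how the three dot-separated parts fit together, and why the payload is readable by anyone — it is encoded, not encrypted; only the signature protects its integrity:

```ruby
require 'base64'
require 'json'

# Encode the header and payload the way a JWT issuer would
# (the signature computation is omitted here; see the services below).
header  = Base64.urlsafe_encode64(JSON.generate(alg: 'HS256', typ: 'JWT'), padding: false)
payload = Base64.urlsafe_encode64(JSON.generate(user_id: 42), padding: false)
token   = "#{header}.#{payload}.signature-would-go-here"

# Anyone holding the token can read the first two parts:
decoded = JSON.parse(Base64.urlsafe_decode64(token.split('.')[1]))
puts decoded['user_id'] # => 42
```

This is why you should never put secrets (passwords, card numbers) in a JWT payload: the signature only proves the token wasn’t modified, it doesn’t hide its contents.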

JWTs are used in the following way:

  1. The API server generates the JWT.
  2. The JWT is sent to the client.
  3. The client saves the JWT (i.e. in LocalStorage).
  4. The client sends the JWT back to the server (in our case, in a request header).
  5. The server uses the JWT’s payload data for X or Y purpose. In our case to identify the user that is performing the request.

 

Besides JWT, we need an authentication strategy to actually identify users. We’ll use Sorcery for this purpose; I recommend reading its documentation if you haven’t done so. You can implement any other authentication library, or build your own, but for this particular case I personally like Sorcery because it offers all the basic authentication features without being overkill like some other libraries.

This is what the general authentication flow looks like:


Fig. 1 JWT Flow

Conveniently, there is a jwt library that we can use to generate the user’s token, so go ahead, read its documentation, and install it in your project.

Next, we’ll create a service to provide the JWT, another to validate it, and a third to authenticate the user.

app/services/jwt/token_provider.rb

app/services/jwt/token_decriptor.rb

app/services/jwt/user_authenticator.rb
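The original snippets for these three services are embedded as gists and don’t appear above. As a rough, framework-free sketch of what they might contain — hand-rolling HS256 with stdlib OpenSSL instead of calling the jwt gem, and using a hard-coded `SECRET` where a Rails app would use its credentials (both assumptions for the sake of a self-contained example):

```ruby
require 'openssl'
require 'base64'
require 'json'

# Assumption: in a real Rails app this would come from encrypted credentials.
SECRET = 'change-me'.freeze

module Jwt
  class TokenProvider
    # Builds a signed HS256 token for a payload such as { user_id: 1 }.
    def self.call(payload)
      header = b64(JSON.generate(typ: 'JWT', alg: 'HS256'))
      body   = b64(JSON.generate(payload))
      "#{header}.#{body}.#{sign("#{header}.#{body}")}"
    end

    def self.sign(data)
      b64(OpenSSL::HMAC.digest('SHA256', SECRET, data))
    end

    def self.b64(str)
      Base64.urlsafe_encode64(str, padding: false)
    end
  end

  class TokenDecryptor
    # Recomputes the signature; returns the decoded payload, or nil if invalid.
    def self.call(token)
      header, body, signature = token.to_s.split('.')
      return nil unless TokenProvider.sign("#{header}.#{body}") == signature
      JSON.parse(Base64.urlsafe_decode64(body))
    end
  end

  class UserAuthenticator
    # Pulls the token out of an 'Authorization: Bearer xxxx.yyyy.zzzz' header
    # and returns its payload when the signature checks out.
    def self.call(request_headers)
      token = request_headers['Authorization'].to_s.split(' ').last
      token && TokenDecryptor.call(token)
    end
  end
end
```

With the jwt gem installed, the provider and decryptor bodies collapse to `JWT.encode`/`JWT.decode` calls, but the responsibilities of the three services stay the same.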

NOTE: In this case we are assuming that the JWT comes in the ‘Authorization’ header with the format ‘Bearer xxxx.yyyy.zzzz’; that’s why we are splitting @request_headers['Authorization'].

Implementing our services

We need to generate the JWT after the user has successfully logged in:

Or signed up:

The token returned in the response should be properly saved by the client so that it can be sent back to the server in the headers of subsequent requests.

Then we need to authenticate the user on future requests. Adding an authentication method to ApplicationController will let us use it in its child controllers to protect our private endpoints:
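Stripped of the Rails plumbing (the original snippet is an embedded gist; `authenticate_request!` is an invented name for this sketch, and in a real controller you would wire it up with a `before_action`), the filter boils down to extracting the Bearer token and rejecting the request when it is missing:

```ruby
# Minimal stand-in for the controller-side check: pull the token out of an
# 'Authorization: Bearer <token>' header and fail fast when it isn't there.
# In the real app, the returned token would be handed to the JWT decryptor
# service to identify the current user.
def authenticate_request!(headers)
  scheme, token = headers['Authorization'].to_s.split(' ')
  raise 'unauthorized' unless scheme == 'Bearer' && token
  token
end
```

In Rails, the `raise` would instead render a 401 response, so unauthenticated clients get a clean JSON error rather than an exception page.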

And that’s it, our API is now secured using a sessionless approach. From this point on you can add more complexity using jwt and Sorcery‘s methods to, for instance, make tokens expire after 1 hour or reset the user’s password.

Let me know in the comments section below what you think about this strategy.
