

Vim: A sense of project

Vim (or Neovim) is an excellent text editor: it is lightweight, ubiquitous, and customizable at its core. But those are secondary traits; what makes Vim shine is its writing speed, driven by powerful, keyboard-centered, composable commands.

Despite its benefits, there are things Vim can't do out of the box:

  • Advanced file management.
  • Git integration.
  • Advanced code completion.
  • Code linting.
  • Pairing sessions.
  • Global find and replace.

All of those are indispensable tools for everyday development. Are Vim users condemned to live without them? Not at all, and I'll show you how to build a better Vim setup so you can access the tools you value most without losing the power of Vim.

Disclaimer: This post is not a guide for total Vim beginners. If you have previously configured Vim, then you are good to go.


Vim's built-in customizability allows the community to create plugins for it, and VimAwesome is the hub for almost all publicly available plugins. Check it out; as its name suggests, it is awesome (you can navigate it with the Vim keys).

Adding plugins to Vim is straightforward:

  1. Download it.
  2. Add it to your config file (.vimrc or init.vim) by sourcing it.
  3. Source your config file again or restart Vim.
  4. Enjoy!

Even though this is not hard to do, it is tedious, so I suggest using a plugin manager like Plug or Vundle. I recommend Plug because it is the one I have used.
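For reference, a minimal vim-plug setup might look like this (the plugin names below are just examples from later in this post):

```vim
" ~/.vimrc (or ~/.config/nvim/init.vim for Neovim)
call plug#begin('~/.vim/plugged')

Plug 'preservim/nerdtree'
Plug 'junegunn/fzf', { 'do': { -> fzf#install() } }
Plug 'junegunn/fzf.vim'

call plug#end()
```

After sourcing the file, run :PlugInstall to download everything.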


File management


Vim comes out of the box with a file explorer named netrw. To use it, enter :Ex or :Explore in normal mode. It shows a list of files in the current directory (you can navigate them with the usual keys). If you open a file and want to get back to the file explorer, use :Rex in normal mode.




At first it seems that you can't create new files or directories from it, but that's only because netrw is such an obscure tool to use. Use :help Explore or press F1 on the explorer screen to see the full list of commands.

For example, to create a new file in the current directory, press the % key; this will prompt you for the new file's name.




You can sort the explorer entries, open them in a split or another tab, hide dotfiles, show/hide files with custom filters, create bookmarks, execute arbitrary commands on marked files, and a few other exciting things.


If you happen to love the way trees present your project's files, you may be better off with the NERDTree plugin, a powerful tool for file management.
NERDTree supports many plugins that enhance its capabilities; I'll show you a few of them later on.


[Screenshot: creating a new file in NERDTree]


You can create a shortcut to quickly toggle NERDTree in your config file:
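A common mapping looks like this (the key choice is up to you):

```vim
" Toggle the NERDTree panel with Ctrl-n
nnoremap <C-n> :NERDTreeToggle<CR>
```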

Also, you can see the plugin documentation by pressing ? while NERDTree is open.


And what about searching for files by name or contents? Here comes FZF: it lets you use the fzf tool for fuzzy file searching, and on top of that, it can delegate to any other searcher like git grep, ripgrep, or The Silver Searcher (I use ripgrep).




You can create a shortcut the same way as in the well-known ctrlp finder by mapping the FZF plugin to the ctrl-p shortcut (and configuring FZF to use ripgrep as its default command):
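For example, assuming fzf.vim is installed and ripgrep is on your PATH:

```vim
" Feed fzf with ripgrep (includes hidden files, skips .git)
let $FZF_DEFAULT_COMMAND = 'rg --files --hidden --glob "!.git/*"'

" Open the fuzzy file finder with Ctrl-p, ctrlp-style
nnoremap <C-p> :Files<CR>
```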

You can use the :Rg command to search file contents.


[Screenshot: searching with fzf bound to ctrl-s]


You can map that command to ctrl-s:
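One way to do it (note that many terminals reserve Ctrl-s for flow control; running `stty -ixon` in your shell frees it):

```vim
" Search file contents project-wide with Ctrl-s
nnoremap <C-s> :Rg<CR>
```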

Project-wide find and replace

Project-wide find and replace might be the hardest thing to achieve in Vim without feeling that you did something you shouldn't have. There are many ways of doing this; I'll present the easiest one (though not the best).

By default, Vim comes with :grep and :vimgrep. They both let you filter files by their content and save the results in the quickfix list. Once the list is built, you can make replacements in just those files using the :cfdo command. For example, to find and replace the function named read_file:

  1. First, enable Vim's hidden feature with :set hidden, so you can edit multiple files without errors about unsaved buffers.
  2. This step is optional, but it speeds up the process by only touching the required files (select only Ruby files containing read_file): :grep -r read_file *.rb
  3. Finally, replace the matches with :cfdo %s/read_file/read/gc | update
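Put together, the whole session looks like this:

```vim
:set hidden                          " keep modified buffers without saving
:grep -r read_file *.rb              " fill the quickfix list with matching files
:cfdo %s/read_file/read/gc | update  " confirm each replacement, then save
```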


[Screenshot: project-wide find and replace in Vim]


This method uses Unix's grep and won't omit files you may not want to touch, like node_modules or files ignored by your source control system. I'll leave a couple of interesting posts at the end of this one so you can read more about this find-and-replace stuff.

Project workspace

You close Vim and call it a day. The next morning, when you open the editor, you have probably forgotten which files you were working on and how you arranged them. Well, you don't have to go through that hassle anymore: Vim has sessions that let you save your workspace exactly as you left it.

In normal mode, enter :mksession to create a new session file in your working directory (called Session.vim by default). If you want to open Vim with that specific session, start Vim with the -S flag and you are good to go.
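A minimal session round-trip, as described above (the custom path is just an example):

```vim
:mksession              " writes Session.vim in the current directory
:mksession! ~/proj.vim  " or overwrite a session at a path of your choice
```

Afterwards, `vim -S Session.vim` (or `vim -S ~/proj.vim`) restores the workspace.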


[Screenshot: restoring a Vim session]


The downside of using sessions is that you have to overwrite them every time you change your workspace layout or open files. To overcome this, you can use Obsession to manage all the session-related work for you.

Remember to always start Vim with vim -S to use your Session.vim file.

Code Completion

By default, Vim can complete based on dictionaries, open buffers, and tags files; this can fall short compared to other text editors.

Also, this is a controversial subject among Vim users, and there is no unanimous verdict on what a completion tool should or shouldn't do. I'll list a couple of plugins for code completion:

  • CoC has support for many languages, function-signature completion, and a handful of extensions.
  • YouCompleteMe has many completion engines for different languages.
  • Supertab lets you trigger completion with the tab key.

Code linters

There are a bunch of useful plugins for code linting; whichever one you pick will work (I use Neomake).

Code navigation

If you want to jump directly to a function declaration, Vim has built-in support for it through the tags file, but you need a tool to generate that file. The downside is that you have to regenerate it every time you make a significant change to your codebase. To avoid that tedious process, use Gutentags, which handles all the tags-related tasks.

To go to a declaration, position the cursor on the specific method or class name and press ctrl + ]. If there is a tags file with that definition, Vim will jump directly to it.


[Screenshot: jumping to a definition with tags in Vim]


Code snippets

I don't use snippets, but if you do, there are plenty of plugins for them. The best known and most used is the SnipMate & UltiSnips snippets plugin.

Also, for commenting code, you can use the NERD Commenter plugin.

Pairing sessions

Vim by itself doesn't have remote pairing capabilities. Still, you can achieve them with tmate, which offers fast SSH session creation with read-only capabilities (and uses your tmux configuration if you have one).

GIT integration

Since you can execute arbitrary commands from Vim, you can talk to Git, but that is not much fun. There are a couple of plugins that enhance Git capabilities inside Vim:

  • NERDTree Git Plugin adds visual indicators to your NERDTree so you know when something is going on with your files.
  • GitGutter uses the sign column to highlight new, edited, and deleted lines in your open buffers, show diffs, etc.
  • Fugitive lets you call any Git command inside Vim, with fantastic diff, log, and blame capabilities.

Spelling and Dictionary

Vim has built-in spell checking, highlighting, and correction. To enable it, enter :set spell spelllang=en_us in normal mode. Use [s and ]s to navigate between spelling errors and z= to show the suggestion list.


[Screenshot: spell checking in Vim]


Vim also has built-in dictionary completion. To use it, your system needs a words file; on Linux there is usually one at /usr/share/dict/words (you can download more). In normal mode, enter :set dictionary+=/usr/share/dict/words to append it to your existing dictionaries.


[Screenshot: dictionary completion in Vim]


Use ctrl + x followed by ctrl + k to open the completion popup, and use ctrl + p and ctrl + n to navigate it.

Large file editing

Vim tends to choke when you open a large file due to line numbering, syntax highlighting, and all the plugins working on that file. To edit such a file, you can disable the features mentioned above yourself, or use the LargeFile plugin to do it for you.



All of this seems like a lot, and you may ask: why should I even care to do all of this if Visual Studio Code, or "insert my favorite code editor", does all that and more right out of the box?

Well, the thing is that Vim lets you tailor the editor as you please. Without significant shortcomings, Vim can be faster than any other code editor you have used, and it is ready to go on almost any Unix system you come across (and Windows too).

For me, it is the experience: that rewarding feeling you get when you pull off a tedious task with a couple of commands. Tools make the work experience; I'm sure woodworkers don't like a dull saw, nor painters cheap brushes.


Useful blog posts and resources:






A quick guide to using Mapbox in Android apps


Google Maps has long been developers' preferred choice for adding maps to their applications, but as prices increased and the number of free API calls was reduced, developers started looking for a cheaper map tool, and that's when Mapbox began to gain popularity.

Mapbox is one of the largest platforms providing designed maps for websites and mobile applications; according to its documentation, it offers features such as maps, search, and navigation, as well as better map customization compared to options such as Leaflet or OpenLayers (if you want to know how many map platforms exist, you can visit this link). It's powered by OpenStreetMap, a massive collaborative project to create free, editable maps.


What you should already know

This tutorial assumes a basic knowledge of:

  • Kotlin
  • MVVM (Model-View-ViewModel)
  • Android coroutines
  • Android Studio


What you’ll learn

In this tutorial, we’ll cover the most common implementations of Mapbox in our applications:

  • Ask for location permissions.
  • Get users' addresses from their current location (reverse geocoding).
  • Get latitude and longitude and set a pin on the map by typing an address within a certain radius (forward geocoding).



You’ll be working on an application that allows you to keep track of the user’s current location by setting a pin over the map. Also, you can search for addresses by using the Places API.


Getting started

Download the base code here or you can clone the repository on Github instead.

Note: Since the project you'll download already contains the main view file, we'll focus only on adding the functionality to our application.



This project already includes the dependencies we need for this tutorial; if you downloaded or cloned the project, the build.gradle file should look like this:

[Screenshot: Mapbox dependencies in build.gradle]

These dependencies are needed to start working with the Mapbox SDK and to build our application using view models; remember that we are going to use MVVM as our design pattern.


Step 1: Getting Mapbox Access Token

Before coding, we have to get a Mapbox access token; this token lets us use the SDK in our Android application. From the official Mapbox documentation on access tokens:

“To use any of Mapbox’s tools, APIs, or SDKs, you’ll need a Mapbox access token. Mapbox uses access tokens to associate API requests with your account. You can find your access tokens, create new ones, or delete existing ones on your Access Tokens page or programmatically using the Mapbox Tokens API.”

To get an access token, sign in to your Mapbox account or create one if you don't have it yet. Once you are logged in, go to the dashboard page and copy the default token.

For this tutorial, it’s ok if you use the default token.

[Screenshot: Mapbox access tokens page]

Copy the access token to your clipboard and open the build.gradle file inside your app folder. Here we'll create a custom field at the end of the defaultConfig block; this way, you can use this variable in the whole project.


[Screenshot: buildConfigField for the Mapbox access token]
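A sketch of that block, with a placeholder token and a field name of our choosing (replace both with your own):

```groovy
android {
    defaultConfig {
        // ...existing config...

        // Exposes the token in code as BuildConfig.MAPBOX_ACCESS_TOKEN
        buildConfigField "String", "MAPBOX_ACCESS_TOKEN", '"pk.YOUR_MAPBOX_ACCESS_TOKEN"'
    }
}
```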


Step 2 – Ask for location permissions

Open your AndroidManifest.xml and add the ACCESS_FINE_LOCATION permission. We’ll use this permission to access the user’s current location.
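The permission entry goes inside the manifest element:

```xml
<!-- AndroidManifest.xml: allow precise location access -->
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
```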



Since Android 6 (Marshmallow), apps are required to ask for permissions at runtime, so we have to write some extra code. If we don't, the app will crash, because access to the current location is a dangerous permission that needs to be approved by the user.

In this case, we'll use PermissionsManager, a handy class that helps request permissions at runtime. It's part of the Mapbox SDK, so you don't need to add an extra dependency.

Go to your MainActivity.kt and create a new nullable variable, permissionsManager:
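Something like:

```kotlin
// Null until we actually need to request permissions at runtime
private var permissionsManager: PermissionsManager? = null
```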



Then let's create a new function in which we'll do two things: a) create a new PermissionsManager instance and b) request the user's current location.



enableLocationComponent(loadedMapStyle: Style) is a function that will be called whenever our map is ready to be used; we'll pass the map style as a parameter, but we'll get to that later.

PermissionsManager.areLocationPermissionsGranted(this) verifies whether the user has already granted location permissions. If they haven't been granted yet, we create a new PermissionsManager instance and request them; this displays a dialog asking the user whether to allow the app access to the location.
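A sketch of the function just described, assuming MainActivity implements PermissionsListener as shown later in this step (the location-component setup belongs in Step 4):

```kotlin
private fun enableLocationComponent(loadedMapStyle: Style) {
    if (PermissionsManager.areLocationPermissionsGranted(this)) {
        // Permissions already granted: enable the location component (Step 4)
    } else {
        // Ask the user for the location permission at runtime
        permissionsManager = PermissionsManager(this)
        permissionsManager?.requestLocationPermissions(this)
    }
}
```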


[Screenshot: location permission dialog for the Mapbox app]


When the user taps either DENY or ALLOW, the system invokes the onRequestPermissionsResult() method, passing it the user's response. Normally we would have to check whether the user accepted the permissions, but since we're using PermissionsManager, the only thing we need to do is call onRequestPermissionsResult() on the permissionsManager instance.

Override the onRequestPermissionsResult() method under the onCreate() method:
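The delegation looks like this:

```kotlin
override fun onRequestPermissionsResult(
    requestCode: Int,
    permissions: Array<String>,
    grantResults: IntArray
) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults)
    // Let Mapbox's PermissionsManager interpret the result for us
    permissionsManager?.onRequestPermissionsResult(requestCode, permissions, grantResults)
}
```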



Finally, the last thing to do is add PermissionsListener to the MainActivity class and implement the two required functions: onExplanationNeeded and onPermissionResult.



The onExplanationNeeded() method is called when the user has previously denied the permission; here we should show an explanation asynchronously, without blocking the main thread. After the user sees the explanation, we can request the permissions again.

The onPermissionResult() method is called when the user denies or accepts the permissions; here, we'll show a message when permissions have been denied, otherwise we'll get the user's current location.
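A sketch of both callbacks (the Toast messages are just examples):

```kotlin
class MainActivity : AppCompatActivity(), OnMapReadyCallback, PermissionsListener {

    override fun onExplanationNeeded(permissionsToExplain: List<String>) {
        // Explain why the app needs the location permission
        Toast.makeText(
            this,
            "This app needs your location to show it on the map",
            Toast.LENGTH_LONG
        ).show()
    }

    override fun onPermissionResult(granted: Boolean) {
        if (granted) {
            // Permission granted: enable the location component on the loaded style
            mapboxMap.style?.let { enableLocationComponent(it) }
        } else {
            Toast.makeText(this, "Location permission not granted", Toast.LENGTH_LONG).show()
        }
    }

    // ...rest of the activity...
}
```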

Once you've finished adding permissions, your MainActivity class should look like this:


[Screenshot: MainActivity class with permission handling]


Step 3 – Setting up Mapbox

Next, we have to create the variables that will hold our map instance:
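Assuming the layout declares a MapView with the id mapView (the id is an assumption), the setup might look like:

```kotlin
private lateinit var mapView: MapView
private lateinit var mapboxMap: MapboxMap

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_main)

    mapView = findViewById(R.id.mapView)
    mapView.onCreate(savedInstanceState)
    mapView.getMapAsync(this)  // triggers onMapReady once the map is loaded
}
```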



Once you've added the lines above, you'll see that getMapAsync(this) raises a compile error. This is because of a listener we haven't implemented in the MainActivity class yet; to solve it, make MainActivity implement the OnMapReadyCallback interface.




onMapReady() is a method that triggers every time our map is ready to be used; this is where we set the map style and assign the value of our mapboxMap variable.

Inside of the onMapReady() method add these lines of code:
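Roughly like this (MAPBOX_STREETS is just one style choice, see the list below):

```kotlin
override fun onMapReady(mapboxMap: MapboxMap) {
    this.mapboxMap = mapboxMap
    mapboxMap.setStyle(Style.MAPBOX_STREETS) { style ->
        // The style is fully loaded here; safe to enable the location component
        enableLocationComponent(style)
    }
}
```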



The setStyle() method takes two parameters: the style of our map and a callback that tells us when the style is available for use.

The available Mapbox styles are listed below; you can use your favorite style or the one that fits your project’s preferences. Also, if none of the default styles are what you need, you can create a custom style from the Mapbox dashboard.

  • MAPBOX_STREETS
  • OUTDOORS
  • LIGHT
  • DARK
  • SATELLITE
  • SATELLITE_STREETS
  • TRAFFIC_DAY
  • TRAFFIC_NIGHT


Also, the Mapbox map has its own lifecycle, just like an activity. It's important to wire it up, because by doing so we avoid errors and memory leaks in our application when using the map.

We have to set up the Mapbox lifecycle by adding the following methods just under the onCreate() method:
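The MapView exposes a matching callback for each activity lifecycle event:

```kotlin
override fun onStart() { super.onStart(); mapView.onStart() }
override fun onResume() { super.onResume(); mapView.onResume() }
override fun onPause() { super.onPause(); mapView.onPause() }
override fun onStop() { super.onStop(); mapView.onStop() }
override fun onLowMemory() { super.onLowMemory(); mapView.onLowMemory() }
override fun onDestroy() { super.onDestroy(); mapView.onDestroy() }

override fun onSaveInstanceState(outState: Bundle) {
    super.onSaveInstanceState(outState)
    mapView.onSaveInstanceState(outState)
}
```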



We need just one more thing to run the application and see the map in action: Mapbox has to be initialized before it is used. We can accomplish this by adding the next line of code before setContentView(R.layout.activity_main). This is important; if we don't do it, we'll get a RuntimeException.
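The beginning of onCreate() would then look roughly like this (the BuildConfig field name matches the one defined in Step 1):

```kotlin
override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    // Must run before setContentView, or inflating the MapView throws
    Mapbox.getInstance(this, BuildConfig.MAPBOX_ACCESS_TOKEN)
    setContentView(R.layout.activity_main)
    // ...
}
```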




Once you've added this line, you can run the application. You should see something like this:




Step 4 – Get the user’s current location

To enable the user's current location, we'll use Mapbox's locationComponent; it can be used to display the user's location on the map.

Go to enableLocationComponent(loadedMapStyle: Style) and add this locationComponent configuration under the if block we use for asking for permissions:
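A sketch of that configuration, using the safe-call style the rest of this step refers to:

```kotlin
val locationComponent = mapboxMap.locationComponent

// Initialize the component with the loaded style before anything else
locationComponent?.activateLocationComponent(
    LocationComponentActivationOptions.builder(this, loadedMapStyle).build()
)
locationComponent?.isLocationComponentEnabled = true
locationComponent?.cameraMode = CameraMode.NONE
locationComponent?.renderMode = RenderMode.COMPASS
```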



The activateLocationComponent method initializes the component and must be called before any other operation is performed.

We use locationComponent?.isLocationComponentEnabled to enable location updates.

locationComponent?.cameraMode determines how the camera tracks the user's current location. In this case we don't want it to track the user, so we set CameraMode.NONE. If you need a different camera mode, the official documentation lists all of them.

Finally, locationComponent?.renderMode controls how the user's current location is rendered on the map.

We have now configured the location component on our map; next, we're going to get the last known location and move the camera to it.

Add these lines of code just below the locationComponent?.renderMode = RenderMode.COMPASS line:
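Something along these lines (the zoom level is an arbitrary choice):

```kotlin
locationComponent?.lastKnownLocation?.let { location ->
    val position = CameraPosition.Builder()
        .target(LatLng(location.latitude, location.longitude))
        .zoom(14.0)
        .build()
    // Animate the map camera to the user's last known location
    mapboxMap.animateCamera(CameraUpdateFactory.newCameraPosition(position))
}
```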



With the code above, we get the user's location and create a CameraPosition.Builder that we use to move the map camera to the location found. Now run the project again to see how this works!




Step 5 – Searching addresses

Note: For this step of the tutorial, we won’t explain how coroutines work and the way they were implemented. We’ll only see how to search for addresses and how to handle the Places API response.

First, let's create the variables we're going to need; add them above the onCreate() method.



Inside the onCreate() method, create the mainViewModel instance, and then define our observe method as well.



Also, let's create the adapter instance that will handle the addresses found.



The application works as follows: on every character the user types, a new request is triggered to fetch the addresses matching that letter or word; that way, the Places API returns results matching the string the user is entering.



afterTextChanged(s: Editable?) is the function we'll use to send a request on every character typed by the user. It does two things:

  1. Clears the search adapter and hides the address list when there is no text in the edit text.
  2. Starts the request using the edit text string in conjunction with centerLocation.
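The two steps above might look like this; note that searchAdapter, rvAddresses, and mainViewModel.searchAddress are hypothetical names standing in for the base project's own adapter, list view, and ViewModel method:

```kotlin
override fun afterTextChanged(s: Editable?) {
    val query = s?.toString().orEmpty()
    if (query.isEmpty()) {
        // 1. Clear results and hide the address list when the field is empty
        searchAdapter.clear()
        rvAddresses.visibility = View.GONE
    } else {
        // 2. Ask the ViewModel for addresses matching the query near centerLocation
        mainViewModel.searchAddress(query, centerLocation)
    }
}
```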




When a user clicks on etAddress, we have to get the location that the camera is pointing at.

Let's add a touch listener to the EditText so that when the user taps the address field we capture the value of centerLocation.



Now, you are ready to run the application and start looking for addresses.


[Screenshot: searching for addresses with Mapbox]


Step 6 – Looking for addresses near the user’s current location

If you run the application and look for an address, you'll see the response returns addresses far from your location, or even from another continent. This behavior is expected: we haven't yet configured the Places API to search only within a certain area.

Go to the MainRepository class and navigate to the getMapboxGeocoding() method. Here we create the MapboxGeocoding object needed to search for addresses; this object has two required parameters, the Mapbox access token and the location query.

To look for addresses near the user's location, you can use the proximity() method, passing the user's location as a Point object to bias results toward it.

First, let's validate that centerLocation isn't null, to prevent the application from crashing with a NullPointerException.
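A sketch of the builder with the null check, assuming centerLocation carries latitude/longitude:

```kotlin
val builder = MapboxGeocoding.builder()
    .accessToken(BuildConfig.MAPBOX_ACCESS_TOKEN)
    .query(query)

// Bias results toward the map's center, if we have it
centerLocation?.let { center ->
    builder.proximity(Point.fromLngLat(center.longitude, center.latitude))
}

return builder.build()
```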



Your getMapboxGeocoding() function should look like this:


[Screenshot: the getMapboxGeocoding() function]


Step 7 – Add a marker

Once we get the list of addresses, we'll draw a marker on the map when the user taps an address.

In the latest version of Mapbox, to draw any visual element over the map, we have to create it as an annotation.

According to Mapbox's documentation, annotations simplify the way we set and adjust the visual properties of markers on a Mapbox map; "annotations" means circles, polygons, lines, text, and icons that we, as developers, can draw over the map.

For this tutorial, we'll use the Symbol annotation. First, in order to create a new symbol, we need a symbol manager; add this line just above the onCreate() method:
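Something like:

```kotlin
// Created once the map style is loaded; null until then
private var symbolManager: SymbolManager? = null
```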



This manager lets us create symbols on the map, with specific methods and properties we can use to configure each symbol.

Next, let’s create a new function in which we’ll create the instance of symbolManager.
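A sketch of that function; the icon id "marker-icon" is our own choice, and mapbox_marker_icon_default is the marker drawable bundled with the SDK:

```kotlin
private fun initMarkerIconSymbolManager(style: Style) {
    // Register the bitmap the symbols will reference by id
    style.addImage(
        "marker-icon",
        BitmapFactory.decodeResource(resources, R.drawable.mapbox_marker_icon_default)
    )
    symbolManager = SymbolManager(mapView, mapboxMap, style).apply {
        iconAllowOverlap = true
    }
}
```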



Now go to the onMapReady() function and, inside the setStyle block, call the initMarkerIconSymbolManager function. When the map is ready to be used, a new SymbolManager instance will be created, and we'll be able to place symbols on that map instance.


[Screenshot: calling initMarkerIconSymbolManager inside onMapReady]


Next, create an addMarker() function that receives a Point object as a parameter. Inside this function, we create a new SymbolOptions object that defines the symbol to be created.
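Roughly like this; clearing previous symbols first is an assumption so only one marker stays on screen, and "marker-icon" is the id registered with the style earlier:

```kotlin
private fun addMarker(point: Point) {
    symbolManager?.deleteAll()  // keep a single marker on screen
    symbolManager?.create(
        SymbolOptions()
            .withLatLng(LatLng(point.latitude(), point.longitude()))
            .withIconImage("marker-icon")
            .withIconSize(1.3f)
    )
}
```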



Also, add this function; it hides the keyboard once the user selects an address from the list.
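A standard way to do it on Android:

```kotlin
private fun hideKeyboard() {
    val imm = getSystemService(Context.INPUT_METHOD_SERVICE) as InputMethodManager
    imm.hideSoftInputFromWindow(currentFocus?.windowToken, 0)
}
```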



Finally, override the onSelectAddress() method of the SearchListener that belongs to the SearchAdapter.



And that’s it! Now, run your application, search for an address, and select one of the options on the list.


[Animation: selecting an address drops a marker on the map]



You just learned the basics of adding Mapbox to your project! I hope you enjoyed this tutorial as much as I did, and I invite you to share it so more people get to know Mapbox. If you have questions or suggestions, think something is missing, or just want to share your thoughts, leave a comment below; I'd appreciate the feedback. Happy coding!


Cover image: Bobby Sudekum



Remote work in pandemic times

One of the best things about working in the IT industry is that we are in a state of constant digital transformation, which means that we are able to use technology to adapt our processes and meet the high standards of a fast-paced world. 

It has been 7 weeks since we started working entirely remotely at Tango, following the government's recommendations to avoid the spread of the virus, but that doesn't mean we are new to this game.

Just to add context, Tango offered the possibility of working remotely even before the pandemic. Consequently, we have been improving our processes for managing distributed teams for the last decade. We know the transition to remote work can strain your internal performance at the beginning if you're not used to it, but it can also help you get ahead of the curve.

You should be prepared for distributed teams, pandemic or not, for various reasons: increased productivity (35-40% more), better quality (40% fewer defects), higher engagement (41% lower absenteeism), and higher profitability (21% higher) (Forbes, 2020). All of these reasons are driving companies to go remote.

Given the benefits and the current situation, organizations are preparing to invest in remote teams more than ever, so we need to find a way to train ourselves to dive into this not-so-new way of working.

Here are 5 points that I'd like to keep in mind each time I need to work with distributed teams.

1. Trust isn’t negotiable: 

According to the research report "Why Trust is Critical to Team Success" by Reina and the Center for Creative Leadership, trust is a must for you and your team. By building trust you can:

  • Deepen the engagement of your talent
  • Foster collaboration
  • Drive change

"Trust building helps teams step into the ambiguity, to stay committed to managing the unknown with confidence, and to embrace change as an opportunity to learn, grow, and do great work together." (Reina, PhD, Reina, PhD, and Hudnut, MIA, 2017)

2. Communication is key:

A lack of communication can lead to the worst outcomes. N. Sharon Hill and Kathryn M. Bartol wrote an interesting article about communication that is worth a read: Five Ways to Improve Communication in Virtual Teams.

The book "From Chaos to Successful Distributed Agile Teams" mentions that the first principle for successful geographically distributed agile teams is to establish acceptable hours of overlap; the key is enough communication time and sufficient communication tools.

According to Johanna Rothman and Mark Kilby, authors of "From Chaos to Successful Distributed Agile Teams", an agile team requires a minimum of four hours of overlap a day for sufficient collaboration. Teams without adequate hours of overlap can still avoid chaos if they follow an organized agenda.

3. I need a committed team:  

This is important to keep up with the Continuous Delivery model we follow at Tango. I need to know I can trust my team to get the job done.

Increasing commitment involves the two aspects previously mentioned, trust and communication, according to "The Five Dysfunctions of a Team" (Lencioni, 2019). The book also adds the importance of healthy conflict, which drives commitment to decisions and avoids an environment where ambiguity prevails.

Lencioni describes in his book how a lack of commitment can lead to avoidance of accountability, which in turn results in inattention to results.

4. We should be looking for continuous improvement:

The Institute of Quality Assurance defined continuous improvement as “a gradual never-ending change which is focused on increasing the effectiveness and/or efficiency of an organization to fulfill its policy and objectives”.

This definition does not exclude the growth of your team, and it should encourage you to seek the improvement of your organization through the growth of your team members.

A way to keep teams motivated is by showing them that failing is part of the learning process and that success comes after many failures. No one is going to punish you for breaking the staging server; just roll back, check what happened, and take responsibility for figuring out what you missed.

5. We should follow a methodology:

We decided to follow the Scrum methodology since it has worked best for our team, but you can choose whichever methodology you think is most appropriate for yours.

Using a methodology will allow you to understand your processes better, know your limitations, be consistent across your projects, and it will give you consistent metrics and expectations.

In case you are interested in the Scrum methodology, let me share our routine.

On Monday, we start with our Sprint Planning meeting, adding a quick standup to check what the team did during the last working day. All work must be ready and without missing details: the whole team follows the Definition of Ready and the Definition of Done, so we don't end up with chunks of non-working product. We run 2-week sprints, following the Scrum methodology.

We have a standup from Tuesday to Friday and don't use cameras at all unless it's our Retrospective meeting (we like the feeling of privacy in our teams); when we do have our Retrospective, team members feel motivated to turn on the camera so we can all interact.

Here are some practical tips I like to take into account when I approach any of my team members: 

  • Before calling my team, I ask myself: “Do I need to call them? Could a message or email be enough?”
  • Start with an icebreaker; it shouldn't take long, but it makes any meeting run more smoothly.
  • When messaging your team, include the reason for the message right after the salutation. This is not rude; people's time is valuable, and combining the greeting with a question doesn't mean you don't care about the person.
  • Agree on the best time to have a call. Unless it’s an emergency, it’s better to have an arranged call than to cold-call your team members.
  • If you create an event, always add the topic, so that everyone is aware of the discussion.
  • Be patient. We all have to deal with bad internet connections and interruptions from our pets, family, and other external situations.



Remote work is no longer just a perk; in fact, it's more like a way of living. Let's be more intentional about how we work and show that we can still be productive and add value as remote team members.

Productivity is a top concern for companies, but we have found that working remotely actually makes us more productive overall. So, if you are transitioning to a distributed team, you can start by implementing some of the tips I shared here.

As an additional benefit, remote work has allowed me to join worldwide teams that share work-related ideas and workshops, and it has helped me continue growing as a professional.

Have you observed any similar situations in your remote working experience? Let us know in the comments down below.



Test Automation Strategy

The search for quality in a software product is the most important activity after the creation of the software itself, and yet there is still much to learn and define. Over time, the industry has come up with a variety of ways to optimize testing tasks, one of them being test automation.

A common misconception is that UI test automation will eliminate the need for manual testing and reduce development costs in a short period of time. Far from it: manual testing is still a very important (if not the most important) part of the software development process and will continue to be so for the foreseeable future.

The Problem

A project without an automation strategy will suffer from delayed releases, fragile functionality, and low morale among team members. To implement test automation successfully, we have to identify the problematic areas; below is a list of the most common ones:

• Test Maintenance:

Even though UI automation can save time and money, it is also expensive to build, maintain, and execute compared to other kinds of automation, such as API or unit tests.

• Test Flakiness:

Flakiness means unreliable, fragile tests that either don't always return the same result for a given input or suddenly fail due to changes in the project's implementation. Tests based on UI comparison are the most prone to flakiness because they rely on a given configuration of the user interface, which tends to change the most.

• Data Management:

A data management strategy organizes the inputs of the project under test; inputs can be categorized as real data, data generated in a controlled environment, or randomized data. When the team lacks a proper strategy, the test execution history becomes useless for predicting future performance, and it also becomes harder for developers to fix any failing test.

• Tool fragmentation:

In recent years, progress in the field of Artificial Intelligence has spurred a variety of new tools that attempt to fix the toughest challenges of test automation. Selenium no longer has the spotlight it once had; proprietary and open-source alternatives now compete for the market.

• Unmotivated Testing team:

We often find testing experts who are not interested in automation as a career option, usually because UI automation requires programming skills.

Proposed Solutions

Having a test strategy in place will help us fix long-term problems without losing sight of day-to-day tasks.

• Test Plan

Write a document describing the test strategy and activities the team will follow throughout the development process, including tasks executed at an early stage, like document testing, as well as post-release actions like crash report gathering.

• The Automation Pyramid

It’s a widely accepted model of test automation that distributes the different tests across three levels depending on their cost of implementation, maintenance, and execution: unit testing, API testing, and user interface testing. Unit tests are the least expensive and should therefore be the most numerous; cost increases through API testing and peaks at user interface testing, the most expensive category, where the fewest tests should be developed.

• Define your test data use cases:

Create data profiles based on use cases and define expected behavior that your tests will verify. Define when the use of Randomized data can be of benefit for the project.

• Tool Selection:

Keep it simple: select the tool you’re most familiar with (via usage or recommendation) that covers the most important requirements (test implementation, CI/CD integration, and reporting), and avoid risking your project on newer solutions that have yet to be proven by the market.


Finally, keep in mind that software testing is a still-growing discipline; there are many styles and interpretations of how to achieve product quality. So keep an open mind and apply the techniques that best fit your project’s needs.





Memoization: Faster Ruby app

The performance impact of your code in a certain programming language and/or platform is often overlooked, which might make you prone to blame those tools instead of analyzing what your implementation did to make your application slower. When used appropriately, memoization may help you improve your application’s speed with an insignificant impact on the readability or flexibility of your code.

What is Memoization?

Memoization is a technique where you can cache a process result in order to use it later without the need to run that process again. This has the potential to save a massive amount of computing/networking power, thus improving the general performance of your application.

How does it work in Ruby?

You can memoize a value in Ruby using instance variables because they live long after a method has finished.

And here comes the magic of Ruby: the ||= (“or equals”) operator. Since instance variables won’t raise an exception when accessed before being declared (they simply evaluate to nil), you can use the “or equals” operator:
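A minimal sketch of the operator in action (the `@value` name is just illustrative):

```ruby
# Assign only when @value is nil or false; otherwise keep the cached value.
@value ||= "expensive result"

# Roughly equivalent to:
# @value = @value || "expensive result"
```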

Here we are saying to Ruby:

“If you don’t have a truthy value for the @value variable, then assign it the value on the right.”

With this knowledge you can extrapolate it to an instance method:
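For instance (the `User` class and the name are made up for illustration):

```ruby
class User
  def full_name
    # Built on the first call, then served from @full_name afterwards
    @full_name ||= "Jane Doe"
  end
end

user = User.new
user.full_name  # computes and caches the value
user.full_name  # returns the cached @full_name
```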

As long as a reference to the user instance exists, that value will stay in memory.

This is a simple example, but what would happen if instead of a string value, that assignment made a database query? Or an API call? A heavy object creation?

Here’s an example of a dummy user model with a dummy database connection object:
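Something along these lines — both `Database` and the SQL string are stand-ins, and `sleep` simulates the slow round trip:

```ruby
class Database
  def query(sql)
    sleep 0.1                      # pretend this is a slow network round trip
    [{ id: 1, title: "Hello" }]
  end
end

class User
  def initialize(db)
    @db = db
  end

  # Without ||=, every call would hit the database again
  def posts
    @posts ||= @db.query("SELECT * FROM posts WHERE user_id = 1")
  end
end

user = User.new(Database.new)
user.posts  # slow: runs the query
user.posts  # fast: served straight from @posts
```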

How to update cached values

There’s no “clean” way of updating a cached value; the only way is to directly override the instance variable, like so:
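For example, continuing the hypothetical `User` class — the writer method simply reassigns the instance variable:

```ruby
class User
  def full_name
    @full_name ||= "Jane Doe"
  end

  def full_name=(new_name)
    @full_name = new_name   # plain assignment overrides the cached value
  end
end

user = User.new
user.full_name              # "Jane Doe" is now cached
user.full_name = "John Roe" # the cache is overwritten directly
```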

Keep in mind that memoized values’ ideal use case is for never-changing values.

Formatting and styling

When you need to memoize a certain not-so-straightforward value, you tend to write code like this:
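Something like this sketch, where the early return and the computation are tangled together (`Report` and its numbers are invented):

```ruby
class Report
  def value
    # Early return and assignment mixed in with the actual computation
    return @value if @value

    subtotal = [100, 200, 300].sum
    shipping = subtotal > 500 ? 0 : 50
    @value   = subtotal + shipping
  end
end
```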

Here you need to pay special attention to what each line of code does: 

“Did you finish the routine early because a certain value is not present?”

“Is this just an assignment method?”

You can certainly rewrite the code so it removes that unwanted cognitive load from you:
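One option is to extract the computation into a private method, here named calculate_value (the `Report` class is the same invented example):

```ruby
class Report
  def value
    @value ||= calculate_value
  end

  private

  # A whole extra method just to encapsulate an obvious task
  def calculate_value
    subtotal = [100, 200, 300].sum
    shipping = subtotal > 500 ? 0 : 50
    subtotal + shipping
  end
end
```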

This will work; it does what it says it does, but now you have created a rather verbose method to encapsulate an obvious task.

Finally, you can rewrite it, as this is taking advantage of a ruby block:
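A sketch using `begin…end` so the whole computation sits inline as the right-hand side of `||=`:

```ruby
class Report
  def value
    @value ||= begin
      subtotal = [100, 200, 300].sum
      shipping = subtotal > 500 ? 0 : 50
      subtotal + shipping
    end
  end
end
```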

This way, the code clearly expresses its intention, and you didn’t need a special method for it; you just used a block that does what the previous calculate_value method did. Now, within the same method, you read:

“Oh, this is a memoized value”
“…And here is the source of that value”


As you may see, memoization can improve your application’s performance, and the greatest thing about it is that it comes with little to no downside in code readability.

Remember, to use it properly you need to find the calculated values and/or reused objects that your application processes over and over while expecting the same result, then memoize them and enjoy your milliseconds of saved time!




A thoughtful insight into why Redis


It’s common for us developers or technology enthusiasts to encounter technology that we can use right out of the box without really knowing what that thing actually does. It was not long ago when I was first introduced to Redis, this magical thing that solves all sorts of problems. 

I was working on a Rails application when I first heard about Redis, but I wasn’t really trying to work with Redis per se; I was actually trying to make Sidekiq work, a background job processor for Ruby that needs Redis in order to run, and, as always, I expected things to work out of the box.

If you are not familiar with background processors in web applications: a background job is basically a function whose result is not needed immediately and can be postponed, because the end-user can live without knowing the result right away; think of resizing their profile picture or, in a more commonly known example, uploading a video to YouTube and being told to come back later for the result.

The basics

Using the same example above, let’s think about how we can make this work in case we wanted to build our own solution for background processes. The best and most simple solution would be:

1. During a request, identify what information should be delivered ipso facto and what can be processed later.
2. Store information about what can be processed later; the obvious answer here is to use our database, whose whole job is to store information.
3. Run a different process than our main application (the one receiving the request); it will be monitoring the database in case something needs to be processed.
4. Process whatever needs to be processed.
5. Profit.

In case you are wondering, as I was at some point: what is this process that will be checking the database for background jobs? It will most likely be another instance of your main application, configured to run without a web server listening for requests and instead running a routine that checks for new background jobs. This way the worker (the main application, minus the fancy web server) has access to all the goodies you have already written to make your life easier: models, serializers, etc. Or, in the case of Rails developers, it lets the magic flow to our worker.
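The steps above can be sketched in a few lines of Ruby. This toy version replaces the database table with an in-memory array, and the worker loop runs until the queue is empty instead of polling forever; all names are made up for illustration:

```ruby
QUEUE = []  # stands in for a "jobs" table in the database

# Step 2: store a description of the work to be done later
def enqueue(job_name, payload)
  QUEUE << { name: job_name, payload: payload }
end

# Steps 3-4: a separate process checks for pending jobs and runs them
def run_worker(results = [])
  while (job = QUEUE.shift)
    results << "#{job[:name]} done for #{job[:payload]}"
  end
  results
end

enqueue(:resize_avatar, "user-42.png")
run_worker  # => ["resize_avatar done for user-42.png"]
```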

The problem

Ok, so it seems like we have the solution here, so why even think about an alternative to any of the 5 easy steps for building our worker? Time will be wise: when you reach your millionth user, you will notice that your database is starting to have trouble reading and writing all the information your inconsiderate users want your application to store and retrieve for them. On top of that, the worker is constantly hitting the database in hope of finding more background jobs to run; after all, the database is the only source of truth for the next background job to be run.

Let’s say that at some point, retrieving a row from our background processes table takes x amount of time; a rough explanation of how our database manager is accessing the data will be the following:

1. It listens for a query request through the configured port.
2. It processes the query to know what to fetch from disk (a more complete explanation of this step can be found here).
3. It asks the OS to read certain sectors on the disk.
4. It reads the sectors from disk.
5. The OS returns the data from the disk.
6. It stores the rows in memory.
7. It sends the information back to the requester.

The key steps here are 3-6. If the database manager wants to access data written on disk, it must ask the Operating System for it; on top of that, we depend on the availability of the disk’s read capabilities, which vary depending on the hardware used in our server. For the sake of simplicity, let’s say these 4 steps take 4/7 of the time needed to return the requested information. So if our request takes 1s to complete, almost 60% of that time was spent retrieving information from disk into memory. Wouldn’t it be better if we could have that information in memory from the beginning? Well, the moment you have been waiting for is here. I present to you: Redis.

The solution

Redis is, from the front page of its site:

“an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker”.

The secret sauce here is: “in-memory”. Redis always stores the data in-memory (RAM) so the response time of any petition can be lightning-fast and can be accessed in the same fashion as a normal database, such as MySQL.

Redis can also back up what it stores in memory to disk, so if you need to restart the machine running the Redis service, you won’t lose anything, as long as Redis finishes writing the backup. After a restart, or in the case of a failure, when the Redis service starts again, it repopulates the in-memory database from the backup on disk.

How it works

You won’t believe the simplicity of Redis, it is just, wait for it… a dictionary (a map if you will), so it is a concept really easy to grasp. You basically set keys with a certain value, and that value can be from a simple string to a member of an ordered set. Redis has a really impressive catalog of structures that it can manage; a complete list of these structures can be found here.

Every structure has its own set of commands, for example, to set and get a simple string from Redis, you will do something like:
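For instance, from a redis-cli session (the key name is arbitrary; this sketch assumes a Redis server running locally on the default port):

```shell
$ redis-cli SET greeting "Hello"
OK
$ redis-cli GET greeting
"Hello"
```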




Just as our first example, we can use Redis to store what our worker needs to start running background processes, in a sense:

1. Our main application writes into Redis a descriptor of the background process.
2. Our worker reads the descriptor and runs the background process.

Another interesting use of Redis is when a third-party service we use works with authentication tokens. Basically, you make an auth request to this third-party service, and it returns a refresh token and an authentication token that works for a certain amount of time, say 1 hour. So, where should we store these tokens inside our app? You guessed it: Redis, where they can be accessed later for other requests, or even be used by our worker.


There are countless situations that fall within Redis’s use cases. As always in technology, Redis is not the magic solution for everything; experience will tell you whether Redis is a fit for what you are doing. And of course, you can always create your own solution on top of Redis, though I bet there is at least one project out there that will fit you just right; but if you are really passionate about deeply understanding how things work, go for it. In the end, practice makes the master.

I hope you find this information useful to make an informed decision in the future, and feel free to comment here what your results were when using Redis, or if you have a cool alternative to it.



How to be a better developer without coding?

Entering a new company is always a challenge, a challenge that not only entails testing your technical knowledge but also how you interact with your co-workers.

Talking with friends and fellow programmers about their experiences when entering a new company, I gave myself the task of delving a little deeper into one specific question: “What qualities do you consider positive in a co-worker?” Considering their answers, I did my best to gather the qualities they most emphasized as positive aspects of a co-worker. These capabilities are called “soft skills”, according to Sophia Bernazzani.

What qualities do you look for in a co-worker?

It seems like an easy question to answer and I know that many will agree that some of the most important qualities of a developer are:

  • Creativity
  • Logic
  • Discipline
  • Knowledge

In my opinion, these qualities must be intrinsic to a developer, so the answer to this question lies in the skills beyond these four.


Proactivity

As developers, every day we face situations that are difficult or beyond our control; knowing how to anticipate these situations and having an action plan for them is the best display of proactivity.

Pavneet Singh Saund says that being proactive means to “create or control a situation instead of simply responding to it after it has happened”. Keeping this in mind, having a proactive teammate is a real positive: even when there is a storm, there will be someone who brings calm and an efficient action plan for the problems that arise, and who will always try to be one step ahead of them.


Empathy

As Zachary Paruch says in his article, “empathy is typically associated with being able to put yourself in the place of someone else”. Taking this into account, we can understand how a newbie feels, because we were all newbies once; that’s why we often feel the need to help them the way we would have liked experienced teammates to help us.

This desire to help, simply because we have experienced similar situations, is exactly what Zachary Paruch means in his article, and it is why I propose it as a quality a teammate should have for the benefit of the entire team: supporting teammates helps them grow as developers and makes the workflow more fluid and less stressful for everyone.


Curiosity

Albert Einstein said, “I have no special talents. I am only passionately curious.” This curiosity led him to become a legend among physicists and was a quality that made him stand out in his field. Curiosity is a quality we should awaken as developers: it pushes us to ask why the code works, instead of simply knowing that a certain method is useful for this or that, and to wonder what would happen if we wrote something differently.

Another advantage that curiosity gives us is the fact that when we see that new technologies are born, a hunger grows in us for learning and knowing about them, as well as mastering the technologies we already knew before.

Curious teammates generally have good learning habits, and good habits are contagious.

Seeing someone with that hunger to learn, to master new technologies, and to grow as a programmer is not only motivating but also extremely useful: these teammates generally have something useful to say, and they are usually people who like to share what they learn, which always benefits the team, since all the members learn just by listening to and following the example of the curious teammate.


Teamwork

“Teamwork makes the dream work” is what Liz Chatterton says in her article.

Working as a team seems an easy task, but knowing how to deal with people of different ideologies, ages, and skills is a challenge we all have to go through. Mastering teamwork and knowing how to interact with the people on your team makes you someone who earns trust faster; and in a team, trust is one of the most important things, since it not only makes working with you more comfortable, but it also helps your colleagues dare to comment or ask about the job without fearing a negative response.

Remember that “A comfortable job is a dream work”.


Motivation

Doing what we like should always be our main objective, since motivation denotes passion for your work. When you like what you do, it shows from the first moment. A motivated teammate is contagious: a good attitude and the desire to do your job well bring the whole team into a unique harmony. If you are motivated, developing the other skills described in this article will come easily. Remember how James Clear defines motivation: “Motivation is a powerful, yet tricky beast”; but with enough effort, discipline, and above all motivation, you will improve yourself bit by bit.


In conclusion, to be a better programmer, your Soft Skills will also be of the utmost importance. How you handle yourself in a team is going to say a lot of your value as a teammate. Knowledge and natural talent are not everything; how you treat your teammates and how you behave in different situations says a lot about you.

The points presented in this article are just some of the points that I could rescue from my experience and different conversations with fellow programmers. If you have any point that you want to extend or add, do not hesitate to leave it in the comments and I will see how to add it to the article.


6 reasons why you should stop using Java and use Kotlin instead


As an Android developer who started working with Android 4 years ago, I had to learn Java in order to create native applications. For the first two years, I learned a lot about Java and I started to feel that I was getting good at it. During this period, I heard you could use Kotlin to create Android apps but I always thought “There’s no way Google would deprecate Java, it’s their main language.”

A year passed, and first-class support for Kotlin was announced at Google I/O 2017. In 2018 Kotlin was voted as the second most loved programming language (StackOverflow).


Most Loved, Dreaded, and Wanted Languages
StackOverflow (2018). Developer Survey 2018. [Image].


Additionally, according to Pusher in “The state of Kotlin 2018”, after the official announcement, the Android community has been migrating from Java to Kotlin rapidly.


Kotlin’s growth
Pusher (2018). The State of Kotlin 2018. [Graphic].


A few months after Google I/O 2017, I was still using Java because I thought Google would not deprecate it, and you could still use it to make apps, so I thought Kotlin was in its “baby steps”.

Even though I kept thinking I should wait a little longer, I did not want to miss the opportunity to learn a new programming language, so I decided to experiment a bit. I started reading articles and blog posts and following tutorials. After a while, I started to get bored because I was only doing tutorials or easy Kotlin examples. Following tutorials is a good start, but they only show you the basics of a certain topic, and I really wanted to apply all the new things I had been learning to a real project. My prayers were heard: I was assigned to a new project that started from scratch, so I took the risk of using a new language in a real project and learning along the way.

To my surprise, learning Kotlin was unexpectedly easy thanks to my Java background. If you’re asking yourself, “Should I use Kotlin instead of Java?” Well, yes, you should! But, why?


Here are some advantages of Kotlin over Java to consider:


1. Say goodbye to NullPointerException

The NullPointerException is the most common exception for a Java developer, and Kotlin aims to eliminate it by using nullable types. A nullable type is a type whose value can be either null or not, and it is defined by adding a question mark (?).

If you try to access a property of a nullable String, e.g., the length of b, the Kotlin compiler won’t allow you to compile your code because it detects a potential NullPointerException (NPE). In order to get the length of a nullable string, you have to tell the compiler that it may access that property only if the value is different from null, by adding the question mark (the safe-call operator).
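A small sketch of what that looks like (the variable names a and b are illustrative):

```kotlin
fun main() {
    val a: String = "Kotlin"
    println(a.length)       // always safe: a can never hold null

    val b: String? = null   // nullable: may hold a String or null
    // println(b.length)    // does not compile: b might be null
    println(b?.length)      // safe call: prints "null" instead of crashing
}
```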

The main benefit of using nullable types is that they give us the chance to avoid an unexpected NPE before our app crashes in runtime.

Nullable types provide this benefit because the Kotlin compiler detects a possible NPE, and Android Studio won’t let us compile our code until we fix it, ensuring that our application is null-safe. However, keep in mind that Kotlin does not make the NPE issue disappear by itself; rather, it forces you to prevent this exception before your app is ever running.


2. Extension functions

An extension function is a way to add new functionality to an existing class without inheriting from it. For example, if you want to remove the last character of a String:
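A sketch of such an extension — the name dropLastChar is made up for illustration (Kotlin’s standard library actually ships the similar String.dropLast(n)):

```kotlin
// Extension function: String gains dropLastChar() without inheritance
fun String.dropLastChar(): String =
    if (isEmpty()) this else substring(0, length - 1)

fun main() {
    println("Kotlin!".dropLastChar())   // prints: Kotlin
}
```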

3. Reduce boilerplate 

Boilerplate is the repetitive code that you see almost everywhere in your project. A classic example of repetitive code is a POJO class in Java, with its fields, getters, setters, equals, and hashCode.

You can create the same POJO class in Kotlin by defining a data class. 
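Assuming a simple User POJO with name and email fields (illustrative names), the Kotlin equivalent is a one-liner:

```kotlin
// equals(), hashCode(), toString(), and copy() are all generated for us
data class User(val name: String, val email: String)

fun main() {
    val user = User("Ada", "ada@example.com")
    println(user)   // User(name=Ada, email=ada@example.com)
}
```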

4. Data class

The example above is using a data class; we can use a data class when the main purpose of the class is to hold data. Likewise, it saves us from adding unnecessary code to every class we need.


5. Interoperability

Kotlin can work with Java in the same project without any trouble because both languages compile to compatible JVM bytecode. If you are migrating your app, or you want to add a feature (built with Kotlin) to your existing Java application, you can do it.


6. Android Studio IDE

Android Studio has grown a lot, and its compatibility with Kotlin is excellent. It also has a feature that converts Java code into Kotlin code with a simple copy/paste, or with a few clicks. This is a useful advantage if you are considering migrating your Java app, but be careful about letting Android Studio migrate your full codebase; I prefer to use this feature only to convert little pieces of code.


It’s not all a bed of roses.

Kotlin is not a perfect language. In fact, it has cons that you may be interested in knowing before choosing it as your main language to develop Android applications.

Learning curve if you are not a Java developer

I mentioned that if you are a Java developer, the learning curve is gentle; however, it may be difficult if you are learning on your own, without an expert Kotlin developer helping you or without any Java experience.

Compilation speed
When it comes to compilation time, Kotlin seems to be slower than Java in some cases, particularly when performing clean builds.

Finding a Kotlin expert
There are a lot of Kotlin developers on the market, but finding an experienced mentor who can help you improve your skills may be a difficult task.

Few learning resources
As I mentioned before, the Kotlin community has been growing fast, but even so, finding a good learning resource can be difficult compared with Java.

If you are interested in learning Kotlin but you don’t know how to start or where to start, you can visit these links to learn more about it:

  • The official Kotlin documentation is easy to read and has a lot of good examples.
  • Android code labs are a bunch of awesome tutorials made by Google developers.
  • Android Weekly is a free newsletter that helps you to stay cutting-edge with your Android Development skills. Every week you will receive an email with several topics, tutorials, and posts about Android development.



Everything about Kotlin has been fascinating. I have encountered zero issues with this language or any implementation that I want to add to my projects. I have been able to solve every challenge I have faced by using Kotlin, without the necessity of adding any Java to my code. 

I do recommend you look for guides when starting with Kotlin, especially if you’re not a Java developer; getting help can fast-track your learning, whether it’s receiving tips, getting feedback, or being told that the solution you implemented is not the best. Every padawan needs a Jedi to guide them to the light side of the Force.

If you are still using Java, my suggestion is to try Kotlin. If you have been following Google’s I/O since 2017, you have noticed all examples about new implementations and tools are using Kotlin. If you are a beginner and looking for a language to learn, choose Kotlin and do not worry; there is a lot of good information about how to start.


Test cases; an asset or a hindrance for the QA testing process?


This blog post aims to address a controversial topic within the QA tester community: test cases. When I first read about this topic, I wondered whether test cases are actually an asset or a hindrance to the QA testing process. The following paragraphs are meant to help us weigh the advantages and disadvantages of test cases.

On one hand, some people say that it is essential to design a test plan and have the whole list of test cases before starting to test. These people consider test cases an asset. On the other hand, some other people say test cases are actually a hindrance to the QA testing process since the tester would then be biased, thus limiting creativity by focusing on the test cases.

First, I am going to present the information supporting the idea of test cases being an asset. 


Test cases are an asset


As we know, there are some projects which lack documentation for different reasons; given the circumstances, these teams cannot provide new teammates with the proper information to understand the product. In this case, new people on the team can use test cases as the documentation they need in order to understand the project.

Other people consider test cases important as it shows stakeholders and/or clients the parts of the system which have been tested; in other words, what testers have worked on. From this perspective, test cases are such a nice tool for the QA testers to justify their work.

Moreover, writing test cases before running any tests is also considered good practice so that future testers are able to run the necessary tests once the person who designed the tests is no longer in the project.

For other testers, creating test cases before even getting their hands on a new product is a great way to learn it. By designing and executing a set of tests, they get good hands-on experience without having to use their peers’ time for a tour of the product.


Test cases are a hindrance


For instance, according to James Bach and Aaron Hodder in their article “Test cases are not testing: towards a culture of test performance”, testing cannot be fully predicted, for two reasons: on the one hand, there is always more testing to do than we can afford; on the other, we don’t know where the bugs are until we find them. Creating test cases before testing may bias the tester.

Similar to what happens with recipes, which are not cooking, we need to bear in mind that a test case is not a test, and we should avoid using an artifact as the basis for human performance; also keep in mind that with tacit knowledge and skill, artifacts are not central, and they might not be necessary at all. So the test case may have a role, but the tester is the true center of testing.

Taking the recipe example into account, the tester’s performance should allow substantial freedom to make her own choices from one moment to the next. Having said this, if a tester has a test case, it is hard to make decisions in the moment of testing, because the tester is biased by the test cases being followed.

Test cases do not cover as many scenarios as a tester would like, for several reasons. Most of the time, a test case covers the happy path and the most common paths for a feature, but it lacks focus on elaborate scenarios and the tricky paths a real user might follow. Having test cases biases testers: they are tricked into focusing on the scripts rather than on learning about the product or finding bugs. They may execute the test cases to the letter and perfectly carry out this activity without finding any defects; yet finding defects, so the product does not ship with bugs, is the main objective of a tester.



Based on my experience regarding test cases and whether they are convenient or inconvenient for the QA testing process, I do have my own take. In the end, I believe it all comes down to being open to the needs of the product, meaning there will be some times when test cases will be quite useful, but some other times, it’ll be better not to stick to a document. After all, testing is more about how testers perform, rather than the test cases per se.

For instance, if regression testing will take place, the best approach might be to use scripts so the tests are carried out in a straightforward way. If the testing is not about checking the main functionalities of a product, following scripts might not be the best approach, since the tester should be free of biases and the product should be explored in great detail. It is also important to keep documentation once testing is done; even if there are no test cases, the tester should keep notes of the results for future reference.





How to conquer legacy code and not die trying

As a software engineer, I know how frustrating it can be to work with legacy code, especially when your client has no idea of the level of technical debt you are inheriting, and wants you to deliver bug fixes and new features as soon as possible. If you’re as passionate about software development as I am, you’re supposed to enjoy it, not hate it. That’s why I’m writing this blog post: To share my experience and a key piece of advice about how to deal with it.

The most common issues of working with legacy code are:

  • Having no tests at all, or no useful tests.
  • Outdated dependencies.
  • Poor software architecture.
  • Technical debt.
  • Lack of documentation.

Here are some recommendations about how to deal with it and not die in the attempt.

Risk Assessment

By performing this assessment, you’ll become aware of all the risks you’re taking (or inheriting). In the end, you’ll have to decide how comfortable you are with those risks. It is therefore super important to know the current state of the project and to be aware of what to expect when it’s time to get your hands on the code.

The goal of this assessment is to learn as much as possible of the current codebase. My recommendation is to focus on the following:

  • Documentation: Unfortunately, most of the time the only documentation that you might find in an existing project is the README file, and even worse, this file is usually not up to date. Look for architecture diagrams, wikis, or any other documentation that can help you to understand the codebase at a glance.
  • Test coverage: Speaking of documentation, tests are considered the best code documentation. You should check for the current test coverage, but more importantly, check the quality of the tests. Sometimes tests check against specific text instead of testing the business logic that drives that text to be displayed. If there are good quality tests, you should be able to have a better sense of the business logic, and moreover, the current architecture.
  • Dependencies: Having a ton of dependencies is not a good sign. I always try to avoid adding a new dependency unless it’s strictly necessary, or the benefits of said dependency exceed the cost of making it happen. Check how outdated dependencies are and how difficult it would be to do upgrades. Pay extra attention when it comes to deprecated versions and you need to do major upgrades.
  • Deployment process: A new feature, bug fix, or improvement is not considered done until it reaches the production environment. You need to understand what the deployment process is since you have to take this into account while giving estimations.
  • Backlog: After getting a little bit familiar with the codebase, you need to know what's about to come. You might be asked to add a particular feature that might not be that easy to add given the current architecture or the version of the dependencies implemented. The value of knowing the backlog beforehand is to raise any warning flags in the early stages so that you can plan.

After going through each of the above points, you should be able to communicate any risks to your client. Set clear expectations and decide whether to move forward or not based on your discoveries.
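The test-quality point above is worth making concrete. Here is a minimal, hypothetical sketch (the invoice names are illustrative, not from any real codebase) of a brittle test that couples itself to display text versus a test that exercises the business rule behind it:

```python
# Hypothetical example: a brittle test vs. a business-logic test.
def invoice_total(items, tax_rate=0.1):
    """Compute an invoice total with tax (illustrative business rule)."""
    subtotal = sum(price * qty for price, qty in items)
    return round(subtotal * (1 + tax_rate), 2)

def render_invoice(items):
    """Presentation layer: formats the total for display."""
    return f"Total due: ${invoice_total(items):.2f}"

def test_display_text():
    # Brittle: any copy change ("Total due" -> "Amount due") breaks it,
    # even though the business logic is untouched.
    assert render_invoice([(10.0, 2)]) == "Total due: $22.00"

def test_tax_is_applied():
    # Better: verifies the rule (tax applied to the subtotal) directly.
    assert invoice_total([(10.0, 2)]) == 22.00
```

Tests like the second one document the business logic and survive cosmetic changes, which is exactly what you want to find during the assessment.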

Buy insurance

If the test coverage isn’t the best, you should ask to work on adding tests before adding new features. You need to be confident enough that you’re not adding new bugs while changing the codebase, and the only way to be sure is to have a proper test suite. Good test coverage is your insurance as a developer.

Refactor on the go

I know that you might be tempted to do a major refactor as soon as you start touching the codebase; however, based on my experience, that is not always the best option. A big refactor can take forever; it’s like a chain reaction. You start with a few files or lines of code, which then scales so quickly that you’re suddenly in a situation where you’ve already refactored hundreds of files with no end in sight.

A better option is to do small refactors on the go. Refactor the code you’re touching based on the features you’re working on.

The approach that I like to follow in this case is one of the SOLID principles, the Open-Closed principle, which states that a class should be open for extension but closed for modification. If you can't extend an existing class or file, then create a new one following this principle, and use it only where it's required. Avoid changing existing implementations, since the changes can snowball quickly and you might get lost in the refactoring instead of focusing on delivering the feature.


Dealing with legacy code shouldn’t be that painful. It depends on your approach to working with it.

Here are the things that you should do before starting the feature development:

  • Read the existing codebase.
  • Analyze the current backlog.
  • Add tests until the point you feel confident enough.
  • Set clear expectations with your client.

Once you’ve already started the development process, these are the things you should bear in mind:

  • Avoid major refactors; instead, do refactor on the go.
  • Add tests for every feature or bug fix you work on.

Let me know your thoughts in the comments section. If you have any other suggestions about how to deal with legacy code, feel free to share them with us.

