Gabo Esquivel - JavaScript Developer. Lean Development, DevOps, Cloud and OSS.

Best Practices for Designing RESTful APIs

An application programming interface (API) exposes the functionality of a software application for other software clients to use. Through APIs, applications interact with each other and share data without any user knowledge or intervention.

Modern web applications typically have RESTful JSON APIs. REST stands for Representational State Transfer and it is a software architecture style consisting of guidelines and best practices for building scalable web services. JSON stands for JavaScript Object Notation and it is a minimal, readable format for structuring data.

When designing an API there are important decisions that have a great impact on the way other applications will interact with the service. Once an API has been defined and other software clients make use of it, changes to the API are costly and should be avoided. By following standards and best practices you reduce the need for API changes to a minimum.
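For instance, a resource-oriented design keeps nouns in the URL and verbs in the HTTP method. The sketch below illustrates the idea with a tiny dispatcher for a hypothetical articles resource; the route table and handler logic are illustrative only, not a framework API.

```javascript
// Illustrative data for a hypothetical "articles" resource.
const articles = [{ id: 1, title: 'REST basics' }];

// Nouns in the URL, verbs in the HTTP method.
const routes = {
  'GET /articles': () => articles,                                   // list the collection
  'GET /articles/:id': (id) => articles.find((a) => a.id === id),    // fetch one item
  'POST /articles': (data) => { articles.push(data); return data; }  // create an item
};

// Tiny dispatcher: resolves a method + path to the matching handler.
function dispatch(method, path, payload) {
  const match = path.match(/^\/articles\/(\d+)$/);
  if (match) return routes[`${method} /articles/:id`](Number(match[1]));
  return routes[`${method} /articles`](payload);
}
```

In a real service a framework such as Express would handle the routing, but the mapping of verbs to collection/item operations is the same.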

Automatic Node.js Version Switching

When working on multiple Node.js projects it is important to configure your development environment to automatically switch to the right Node version for a particular project. You can automate this task in many different ways. I opted for a module called avn, which works with both nvm and n for automatic Node.js version switching. In my case I use nvm as my version manager. In order to achieve automatic version switching with avn you need to add a .node-version file at the root of your project specifying the Node version required.

After installing avn, when you cd into a directory with a .node-version file, avn will automatically detect the change and use your installed version manager to switch to that version of Node. If that version is not available you have to install it yourself; avn won’t try to do so, it will only attempt to switch to that version and notify you if it is not available in your environment.

It is important to mention that as good practice you should always specify the Node version in the package.json file, in the engines attribute: "engines": { "node": "0.12.7" }
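For example, a project pinned to the Node version mentioned above would contain a .node-version file at its root:

```
0.12.7
```

and a matching entry in package.json:

```json
{
  "engines": {
    "node": "0.12.7"
  }
}
```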

UPDATE: March 25, 2016
Kikobeats just released nodengine, which does exactly the same thing but reads the version from the engines field in the package.json.

On Continuous Delivery

Continuous delivery is a practice in software development in which teams work in a way that allows companies to update their systems at any point in time, or continuously, through automated processes; the system’s code base is always deployable and tested.

Why is it important?

Continuous delivery is a more efficient way to build software as it enables a team to get constant feedback on the application’s changes and updates, allowing you to detect problems early and consequently improve quality, reduce costs and lower deployment friction. This constant feedback also gives the team a realistic view of development progress instead of relying on perceptions.

Continuous delivery gives a company the ability to react quickly and respond to change. Having a fluid software development process allows you to change your strategy more easily and rapidly.

Films and Documentaries Worth Watching

This is a curated list of films and documentaries related to the internet, programming and hacking. If you are a web developer or consider yourself a problem solver you will probably enjoy them. These films contain historical and philosophical content on subjects related to programming, the internet, the evolution of human consciousness, activism, social action, environmental causes, open source and free software movements.

Unit Testing: Mocks, Stubs and Spies

In unit testing, isolation is key. The class/object/function you are testing is called the System Under Test (SUT), and the SUT often interacts with other parts of the system; these parts are called Collaborators or Dependencies. When testing, simulating these collaborators/dependencies and their behaviors allows you to test the units in isolation. Gerard Meszaros, author of xUnit Test Patterns, uses the term “Test Double” as the generic term for any kind of pretend object used in place of a real object for testing purposes. The name comes from the notion of a Stunt Double in movies.

Mocks, Stubs, Spies, Dummies and Fakes are types of test doubles that will help you accomplish the goal of isolation. There are several libraries that provide tools to easily create these objects in your tests. Sinon.js is a JavaScript library that provides standalone test spies, stubs and mocks with no dependencies that work with any unit testing framework.
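To illustrate what these doubles do, here is a hand-rolled double that acts both as a spy (it records calls) and as a stub (it returns a canned value). In real projects a library like Sinon.js gives you this and much more; the mailer example below is purely illustrative.

```javascript
// Create a double that records every call (spy) and returns a canned value (stub).
function makeSpy(returnValue) {
  function spy(...args) {
    spy.calls.push(args); // spy behavior: record the arguments of each invocation
    return returnValue;   // stub behavior: always return the canned answer
  }
  spy.calls = [];
  return spy;
}

// The SUT: a function whose collaborator (the mailer) we want to isolate.
function notifyUser(user, mailer) {
  return mailer(user.email, 'Welcome!');
}

// Inject the double instead of a real mailer, so no email is ever sent.
const fakeMailer = makeSpy(true);
notifyUser({ email: 'a@b.com' }, fakeMailer);
// fakeMailer.calls now holds [['a@b.com', 'Welcome!']], which the test can inspect.
```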

Software Unit Testing

Testing a web application is critical to ensure the program does what it is supposed to do and that new functionality and changes don’t break existing parts of the application. Well-tested applications are more easily extended.

Testing can be defined as:

Taking measures to check the quality, performance, or reliability of (something), especially before putting it into widespread use or practice.

“Oxford Dictionary”

There are 3 main levels of testing and they are complementary:
Scenario Testing / End-to-End Testing (E2E): tests the whole application by pretending to be a user.
Functional Tests / Medium-Level Tests: a piece of functionality is tested in isolation, by simulating external dependencies.
Unit Tests: focused on application logic; tests the smallest unit of functionality, typically a method/function.

Unit testing works by isolating small “units” of code so that each can be tested from every angle. Any dependency that is slow, untested, hard to understand or initialise should be stubbed or mocked so you can focus on what the unit of code is doing, not what its dependencies do. Tests should ideally be written by developers, the same people who write the functionality, not by a QA team. Demoting unit testing to a lower priority is almost always a mistake.

In-Place Editing With Contenteditable and AngularJS

In-place editing provides an easy way to let the user edit parts of a page without being redirected to an edit page. Instead, the user can just click around on a page and edit the elements he or she wishes to change, without reloading the page. When the user hovers over an editable area, the background color of the element changes. When clicked, the text becomes editable.

You can make an element editable by adding the contenteditable attribute in your markup. This attribute has three possible values: true, false, and inherit. Specifying inherit will make the element editable if its immediate parent is editable.

<div class="editable" contenteditable="true"></div>

The following directive uses the contenteditable attribute and ng-model for data binding.

See the Pen Editing Page Elements with contenteditable by Gabo Esquivel (@gaboesquivel) on CodePen.
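For reference, a sketch of such a directive (AngularJS 1.x), wiring the element’s content into ngModel; the module name is a placeholder.

```javascript
// Sketch: two-way binding between a contenteditable element and ngModel.
angular.module('app').directive('contenteditable', function () {
  return {
    restrict: 'A',
    require: 'ngModel',
    link: function (scope, element, attrs, ngModel) {
      // Model -> view: render the model value into the element.
      ngModel.$render = function () {
        element.html(ngModel.$viewValue || '');
      };
      // View -> model: push edits back into the model on every change.
      element.on('blur keyup change', function () {
        scope.$evalAsync(function () {
          ngModel.$setViewValue(element.html());
        });
      });
    }
  };
});
```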


Node.js HTTPS and SSL Certificate for Development

HTTPS is the HTTP protocol over TLS/SSL, and it is required to protect your data. It is the most popular network protocol for establishing secure connections to exchange documents on the internet: essentially HTTP carried over a TCP socket that has been secured using SSL. Transport Layer Security (TLS) and Secure Sockets Layer (SSL) are cryptographic protocols designed to provide communication security. In this post I’ll show how to create a self-signed SSL certificate and set up an Express 4.0 project that uses it for local development purposes.

Self-Signed SSL Certificate

There are two kinds of certificates: those signed by a ‘Certificate Authority’, or CA, and ‘self-signed certificates’. A Certificate Authority is a trusted source for an SSL certificate, and using a certificate from a CA allows your users to trust the identity of your website. In most cases you would want a CA-signed certificate in a production environment; for testing purposes, however, a self-signed certificate will do just fine.

Differences Between TDD, ATDD and BDD

Test-driven development (TDD) is a technique of using automated unit tests to drive the design of software and force decoupling of dependencies. The result of using this practice is a comprehensive suite of unit tests that can be run at any time to provide feedback that the software is still working.

The concept is to “get something working now and perfect it later.” After each test, refactoring is done and then the same or a similar test is performed again. The process is iterated as many times as necessary until each unit is functioning according to the desired specifications.

ATDD stands for Acceptance Test Driven Development; it is also, less commonly, designated Storytest Driven Development (STDD). It is a technique used to bring customers into the test design process before coding has begun. It is a collaborative practice where users, testers, and developers define automated acceptance criteria. ATDD helps to ensure that all project members understand precisely what needs to be done and implemented. Failing tests provide quick feedback that the requirements are not being met. The tests are specified in business domain terms. Each feature must deliver real and measurable business value: indeed, if your feature doesn’t trace back to at least one business goal, then you should be wondering why you are implementing it in the first place.

Behavior-Driven Development (BDD) combines the general techniques and principles of TDD with ideas from domain-driven design. BDD is a design activity where you build pieces of functionality incrementally guided by the expected behavior. The focus of BDD is the language and interactions used in the process of software development. Behavior-driven developers use their native language in combination with the language of Domain Driven Design to describe the purpose and benefit of their code.

A team using BDD should be able to provide a significant portion of “functional documentation” in the form of User Stories augmented with executable scenarios or examples. BDD scenarios are usually written in a very English-like language that helps domain experts understand the implementation, rather than exposing code-level tests. They are usually defined in the GWT format: GIVEN, WHEN & THEN.
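An illustrative GWT scenario (Gherkin syntax, invented domain):

```gherkin
Feature: Account withdrawal

  Scenario: Withdraw with sufficient funds
    Given an account with a balance of 100
    When the user withdraws 40
    Then the balance should be 60
```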


TDD is a paradigm rather than a process. It describes the cycle of writing a test first and application code afterwards, followed by an optional refactoring. But it doesn’t make any statements about: Where do I begin to develop? What exactly should I test? How should tests be structured and named? When your development is behavior-driven, you always start with the piece of functionality that’s most important to your user.

TDD and BDD have language differences; BDD tests are written in an English-like language.

BDD focuses on the behavioral aspect of the system, unlike TDD, which focuses on the implementation aspect of the system.

ATDD focuses on capturing requirements in acceptance tests and uses them to drive the development. (Does the system do what it is required to do?)

BDD is customer-focused, while ATDD leans towards the developer-focused side of things like [Unit]TDD does. This allows much easier collaboration with non-techie stakeholders than TDD.

TDD tools and techniques are usually much more techie in nature, requiring that you become familiar with the detailed object model (or in fact create the object model in the process, if doing true test-first canonical TDD). The typical non-programming executive stakeholder would be utterly lost trying to follow along with TDD.

BDD gives a clearer understanding as to what the system should do from the perspective of the developer and the customer.

TDD allows a good and robust design; still, your tests can be very far from the user’s requirements. BDD is a way to ensure consistency between requirements and the developer’s tests.

Get Started With Command Line and Z Shell

A quick introduction… When developing a web application, tooling and workflow are very important. Taking the time to learn and master the command line is not only highly recommended, it is required to make use of tools that will help you develop faster and gain more control of your workflow.

This post summarizes what you need to know to get going with the command line, as well as sharing some personal recommendations on the setup of the command prompt on Mac OS X, though most of it applies to other *nix systems as well.

What is a Shell?

The shell is an application that offers interactive console or terminal access to a computer system. It lets you interact with applications on your computer through the command line. A command-line interface (CLI) is a mechanism for interacting with a computer operating system or software by typing commands to perform specific tasks; a command-line interpreter then receives, parses, and executes the requested user command.

Most operating systems offer a command-line interface, but that doesn’t mean the built-in version is best. Mac OS X comes with Terminal; however, there’s a terminal emulator for Mac OS X that is more customizable and does amazing things out of the box: it’s called iTerm. If you are using a Windows machine I’d recommend installing Cygwin.

In order to use the command line you will need to memorize commands. Start with the basic system commands; once you’ve mastered those you’ll catch up quickly with other tools such as gulp.js.
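A few basic commands to try (safe to run in any writable directory):

```shell
pwd                # print the current working directory
mkdir -p demo      # create a directory (no error if it already exists)
cd demo            # move into it
ls -la             # list its contents, including hidden files
cd ..              # move back up one level
```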