March 2019

Xpand IT enters the FT1000 ranking: Europe’s Fastest Growing Companies

Xpand IT proudly announces our entry into Europe’s Fastest Growing Companies, the FT1000 ranking compiled by the Financial Times! With sustained growth surpassing 45% in 2018, Xpand IT earned a place among the 1000 fastest-growing European companies, ranked according to their consolidated results between 2014 and 2017.

Revenue of 10 million and a team of 195 employees were the figures that secured our place on this list. Our revenue has since leapt to 15 million, and we can now count on the tireless work of more than 245 employees. And so, of the three Portuguese tech companies featured in the ranking, Xpand IT can boast the best results in terms of revenue and the acquisition of new talent.

Paulo Lopes, CEO & Senior Partner of Xpand IT, said: “Having a place on the FT1000 European ranking is the ultimate recognition for all the work we have undertaken over the last few years. We are renowned for our know-how and expertise within the technology arena, and now also for our unique team and business culture, focused on excellence and innovation, which makes it far easier to achieve these kinds of results.”

This year’s goal is to maintain our growth trend, not just by expanding into new markets, but also by increasing our workforce. In 2019, we expect to reach the beautifully rounded number of 300 Xpanders!


7 steps to implement a data science project

Data science is a set of methods and procedures applied to complex, concrete problems in order to solve them. It uses data inference, algorithm development and technology to analyse collected data, understand phenomena and identify patterns. Data scientists need mathematical and technological knowledge, along with the right mindset, to achieve the expected results.

By unifying concepts such as statistics, data analysis and machine learning, the main objective is to uncover behaviours, trends or inferences in data that would be impossible to identify through simple analysis. The discovery of valuable insights allows companies to make better business decisions and leverage important investments.

In this blog post, we unveil 7 important steps to facilitate the implementation of data science.

1. Defining the topic of interest / business pain-points

In order to initiate a data science project, it is vital for the company to understand what it is trying to discover. What is the problem presented to the company, or what kind of objectives does the company seek to achieve? How much time can the company allocate to working on this project? How should success be measured?

For example, Netflix uses advanced data analysis techniques to discover viewing patterns among its customers, in order to make better-informed decisions about which shows to offer next; meanwhile, Google uses data science algorithms to optimise the placement of display banners, whether for advertising or re-targeting.

2. Obtaining the necessary data

After defining the topic of interest, the focus shifts to collecting the data needed for the project, sourced from available databases. There are innumerable data sources: the most common are relational databases, but there are also various semi-structured sources. Other ways to collect the necessary data include establishing connections to web APIs or harvesting data directly from relevant websites for future analysis (web scraping).
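As a minimal, hypothetical sketch of this collection step, Python’s standard sqlite3 module can pull records from a relational source; here an in-memory database stands in for a production system, and the table and column names are purely illustrative:

```python
import sqlite3

# Stand-in for a production relational database; in a real project
# the connection, table and columns come from the company's systems.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 120.0), ("south", 80.5), ("north", 95.0)],
)

# Extract the raw data set that will feed the later analysis steps
rows = conn.execute("SELECT region, amount FROM sales").fetchall()
print(rows)
```

The same pattern applies with any database driver: connect, query the relevant tables, and hand the raw rows on to the cleaning step.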

3. “Polishing” the collected data

This is the next step – and the one that feels most natural – because after extracting the data from its original sources, we need to filter it. This process is absolutely essential, as analysing unfiltered, inconsistent data can lead to distorted results.

In some cases, data and columns will need to be modified to confirm that no variables are missing. One of the most important steps, therefore, is combining information from various sources, establishing an adequate foundation to work on and creating an efficient workflow.

It is also extremely useful for data scientists to have experience and know-how with tools such as Python or R, which allow them to “polish” data much more efficiently.
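To make the “polishing” step concrete, here is a minimal sketch in plain Python (in practice pandas or R would typically do this work); the records and field names are invented for illustration:

```python
# Raw records as they might arrive from extraction: duplicates,
# missing values and numbers stored as strings.
raw = [
    {"customer": "Ana", "age": "34"},
    {"customer": "Ana", "age": "34"},   # duplicate row
    {"customer": "Rui", "age": None},   # missing value
    {"customer": "Marta", "age": "29"},
]

seen, clean = set(), []
for rec in raw:
    key = (rec["customer"], rec["age"])
    if rec["age"] is None or key in seen:
        continue  # drop incomplete and duplicate records
    seen.add(key)
    # normalise types so later analysis can rely on them
    clean.append({"customer": rec["customer"], "age": int(rec["age"])})

print(clean)
```

Deduplication, handling of missing values and type normalisation are exactly the operations that distort results when skipped.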

4. Exploring the data

When the extracted data is ready and “polished”, we can proceed with its analysis. Each data source has different characteristics, implying equally different treatments. At this point, it is crucial to produce descriptive statistics and test several hypotheses about significant variables.

After testing some variables, the next step is to transfer the data obtained into data visualisation software, in order to reveal any patterns or trends. It is at this stage that artificial intelligence and machine learning can be brought in.
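As a small illustration of the descriptive-statistics step, Python’s standard statistics module can summarise a variable before it goes into a visualisation tool (the sales figures below are invented):

```python
import statistics

# Illustrative daily-sales figures; real values would come from
# the cleaned data set produced in the previous step.
daily_sales = [120.0, 95.0, 80.5, 110.0, 130.0]

summary = {
    "mean": statistics.mean(daily_sales),
    "median": statistics.median(daily_sales),
    "stdev": round(statistics.stdev(daily_sales), 2),
}
print(summary)
```

Even this simple summary (central tendency plus spread) is often enough to spot outliers or formulate the hypotheses worth testing.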

5. Creating advanced analytical models

This is where the collected data is modelled, treated and analysed. It is the ideal moment to create models in order to, for example, predict future results. Basically, it is during this stage that data scientists use regression formulas and algorithms to generate predictive models, forecasting values and future patterns in order to generalise from past occurrences and improve the efficiency of decisions.
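As a toy illustration of this modelling step, an ordinary least-squares line can be fitted by hand in Python to forecast the next period; the data points are invented, and in practice a library such as scikit-learn would be used:

```python
# Fit y = slope * x + intercept by ordinary least squares.
xs = [1, 2, 3, 4, 5]                  # e.g. month number
ys = [10.0, 12.0, 13.5, 16.0, 18.0]   # e.g. sales in that month

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (
    sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    / sum((x - mean_x) ** 2 for x in xs)
)
intercept = mean_y - slope * mean_x

forecast = slope * 6 + intercept  # predict the next period
print(round(slope, 2), round(forecast, 2))
```

The fitted slope is the “regression formula” the text refers to: a generalisation learned from past data that can be applied to future values.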

6. Interpreting data / gathering insights

We are now approaching the final stage of a data science project. In this phase, it is necessary to interpret the defined models, discover important business insights – generalisations that can be applied to future data – and answer all the questions asked at the beginning of the project.

Specifically, the purpose of a project like this is to find patterns that can help companies in their decision-making processes: whether to avoid a certain detrimental outcome or repeat actions that have reproduced manifestly positive results in the past.

7. Communicating the results

Presentation is also extremely important, as project results should be clearly outlined for stakeholders (who, in the vast majority of cases, lack technical knowledge). The data scientist must possess the “gift” of storytelling so that the entire process makes sense and meets the requirements needed to solve the company’s problem.

If you want to know more about data science projects or if you’d like a bit of advice, don’t hesitate to get in touch.


Node.js: the JavaScript platform used by Netflix and Uber

The progressive and noticeable growth of JavaScript is hard to ignore. Over the years, this programming language has given rise to hundreds – if not thousands – of frameworks and libraries, helping developers and companies create websites, portals and interactive, agile applications with modern interfaces. Add the fact that JavaScript is platform-independent, easy to learn and supported by an ever-growing community, among many other advantages, and it is easy to understand why.

However, for a long time, JavaScript was a language exclusively oriented towards client-side development and never managed to establish itself for backend purposes – at least until 2009, when the first version of Node.js was launched. For the first time in history, JavaScript became a viable alternative for backend solutions.

It is important to dispel the fears many companies have about this alternative to more traditional backend solutions (Java, .NET, etc.) in the world of Enterprise applications, especially since companies including Netflix, Trello, PayPal, LinkedIn, Uber, Walmart, NASA, Intel and Twitter have already successfully implemented Node.js in their infrastructures – and this list continues to grow each day.

For those who are not familiar with Node.js, it is important to highlight some of its biggest advantages:

  • Ideal for the construction of real-time applications;
  • Facilitates the programmer’s full stack vision in JavaScript (as both backend and frontend languages are the same);
  • Decreases development time, thanks to its full stack view;
  • Supported by a gigantic community that contributes new libraries and updates at an astonishing rate;
  • Extremely fast code execution;
  • Ideal for architectures oriented towards microservices.

We can now go back to what we really want to discuss: why should companies adopt Node.js for their applications? In a nutshell, because it was designed for large-scale applications, offering a modern perspective on how to develop applications with complex architectures.

How those capacities actually come to fruition is the most important aspect.

Scalability is essential for the vast majority of modern corporate applications, and Node.js responds to that need with a built-in cluster module that load-balances across multiple CPU cores. Combining this clustering power with a single-threaded, non-blocking architecture designed around events and callbacks allows it to handle huge numbers of concurrent connections.

Being single-threaded is often regarded as a limitation because, in theory, it can slow down an application’s performance, but this is a myth. In solutions that are not event-oriented, where multiple threads are needed to handle multiple requests, the number of parallel threads is limited. Node.js is free from this limitation: as long as memory is available and the kernel allows it, any number of simultaneous requests can be processed effortlessly.

Companies are also often reluctant to place their code in the cloud, which would rule out using the public NPM (Node Package Manager) registry. To address this issue, npm offers an Enterprise version that can be installed and maintained on a company’s own infrastructure, preserving an internal module registry and complying with the strictest security requirements.

We also need to touch on the subject of long-term support. This will always be a priority for Enterprise solutions, but the truth is that Node.js also assures that very same support.

Each major version of Node.js receives active support for 18 months from the date it becomes eligible for LTS (Long-Term Support), after which it transitions to a maintenance regime lasting a further 12 months. During this period, the version still receives security updates and bug fixes, but no new functionality is added. This addresses the concern that solutions built with Node.js might lack long-term support.

Based on all this information, the aforementioned companies decided to make their transition to this technology. What have they accomplished?

  • Netflix: a reduction of over one minute in buffering times.
  • LinkedIn: rebuilt the core of their mobile services with Node.js. The application now runs 20 times faster and benefits from substantially better integration between backend and frontend. This was achieved while Node.js was still in its first year of development.
  • PayPal: migrated their web applications from Java to JavaScript and Node.js. Their programmers wrote 33% fewer lines of code, used more than 40% fewer files and halved the time needed to build their applications (with a smaller team). Response times decreased by roughly 35%, which translates to an improvement of 200 ms in page creation times.
  • Uber: built their driver-passenger matching system with Node.js, thanks to its fast response times, its massive capacity to process requests, and the ease with which it supports a distributed architecture.

I don’t want to plant the idea that Node.js is a “silver bullet”. It might not be the best solution for all cases, but it is always wise to evaluate your possibilities and understand the potential benefits of this technology.

Francisco Costa

Enterprise Solutions Lead


The impact of Big Data on Social Media Marketing

Social media was born with the intent to create remote connections between colleagues, friends and others who wanted to share knowledge and information. Even though this purpose remains at its core, the truth is that social media has evolved exponentially over the years, becoming a powerful two-way communication tool between companies and clients.

Nowadays, social media allows companies to publicise their brand and products, facilitating the rapid growth of their client base while also allowing the ceaseless collection of inputs from their users, whether they are clients or not.

For that reason, each like, comment or share gives companies a better understanding of their clients and their respective behaviours, through the way in which they interact with specific types of content. This behavioural analysis and exchange of information generates a massive volume of data, which can only be stored and processed using “Big Data” technologies.

In reality, Big Data has impacted almost every sector of our daily lives, shaping the way people communicate, work and even have fun.

In recent decades, the quantity of data generated has grown exponentially, doubling in size every two years and potentially reaching 44 trillion gigabytes by 2020. The massification of the World Wide Web and the Internet of Things has abruptly increased the amount of data generated, equally intensifying the need to reduce the time it takes to transform and access that data.

Big Data is the technological concept that encompasses a particular set of practices and tools, tackling this problem using 5 fundamental principles:

  • Volume (storing, processing and accessing vast amounts of data)
  • Variety (cross-referencing data from various sources)
  • Velocity (the speed of data access, treatment and processing)
  • Veracity (guaranteeing the accuracy of the information)
  • Value (the usefulness of the information processed)

This “new” data access method and processing power has established a new paradigm within the marketing sector. Now it’s easier to analyse and identify trends, as well as possible cause and effect relationships to apply to marketing strategies. These types of analyses have become indispensable to companies for increasing the percentage of messages that actually reach the target, resulting in the growth of their ROI (return on investment).

How do we take advantage of Big Data in a marketing strategy?

The first step is to establish a relationship between the unstructured data provided by social media and the data you already hold, such as your clients’ details. After completing this step, it will be easier to observe and analyse your clients’ actions, collecting important insights that will form a solid base for your future campaigns.
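This linking step can be sketched in a few lines of Python, joining social-media interactions to existing client records by a shared key; all names and fields here are hypothetical:

```python
# Existing client details, keyed by a shared identifier (email here).
clients = {
    "ana@example.com": {"segment": "premium"},
    "rui@example.com": {"segment": "standard"},
}

# Semi-structured interaction events collected from social media.
interactions = [
    {"email": "ana@example.com", "action": "like"},
    {"email": "ana@example.com", "action": "share"},
    {"email": "rui@example.com", "action": "comment"},
]

# Join the two sources: count interactions per client segment.
by_segment = {}
for event in interactions:
    segment = clients[event["email"]]["segment"]
    by_segment[segment] = by_segment.get(segment, 0) + 1

print(by_segment)
```

At real-world volumes this same join would run on Big Data tooling rather than in-memory dictionaries, but the principle – linking behavioural events to client records by a common key – is identical.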

Now you can outline marketing strategies focused on all the insights you’ve gathered. In other words, you are now able to design marketing campaigns anchored by content that fulfills the needs of your clients, or segmented groups of clients.

Execution time has arrived! Now that you have actionable content based on your analyses, it is time to discover how effective your strategy really is.

You’ve almost certainly worked out that this is a fundamental formula for success, but reaching that sweet spot requires constant “fine-tuning”. In other words, from this point forward your digital marketing strategy works in a cycle: more insights about your clients improve the reach and suitability of your strategies and content, which in turn generate more insights.

Social media marketing is a tool that allows a company of any size, in any market, to better understand its clients and work out the most effective strategies to shape its offering to their needs.

The truth: without Big Data, none of this would have been possible!
