Domain Model and Security Principles of the re.alto API Platform
Development
This article is intended as a guide/intro for developers/architects using our API platform.
20.01.2025

Introduction
The vision of re.alto is to support businesses in developing outstanding products by providing APIs and energy-driven data solutions to help build digital products faster. We do this by connecting to devices through their existing IoT connectivity. At the core of our solution is a powerful IoT management platform. It connects to any type of device, streams device data in real time and securely stores it for future retrieval. The platform can stream thousands of data sets per second, aggregate readings, retrieve charge data records (where available) and be used to manage and steer devices. Integration is also straightforward.
The guide below explains our domain model, the terminology used and our IoT platform’s security principles.
Domain Model Components (Terminology and Set-Up)
The platform is structured in Tenants. A Tenant refers to a customer environment. Every Tenant has an administrator, who controls everything within that Tenant. The Tenant admin can be either a person or a program/app, known as Principals of type User or Client respectively. A Client is usually used by a backend system/process such as an app and has a Client ID and a Client secret. A User logs in using an email and password. It is this Client ID or User ID that defines what you have access to see on the platform. The Tenant admin is also a Principal (and therefore either a Client or a User). Members are Clients or Users that are part of a Tenant but are not admins. Members also have either a Client ID or a User ID; however, members cannot add or remove themselves or other members to or from a Tenant – only the Tenant admin has the right to do this.
In each Tenant, the Tenant admin can onboard devices which we refer to as Entities. An Entity is added in the system via an onboarding request raised by a Principal with access, which also becomes the owner unless a different owner is specified in the request. Any sort of device that we onboard becomes an Entity and receives an Entity ID. Each Entity has an owner. The Entity owner has the right to change its properties. Members have reduced rights and can read the data but cannot alter the properties of an Entity.
Entities can be grouped together in Collectives. A Tenant can have multiple Collectives, making it easy to separate Entities into groups (depending on the company they belong to, for example). Entities that are grouped together in a Collective can be displayed together. Each Collective has an owner that is assigned by the Tenant admin, and multiple members can be added to each Collective, all of whom then have rights to see the data of the Entities within that Collective. “Collective” therefore refers to a group of Entities together with the Users who are its members. A Collective of Entities has a Collective owner and Collective members. The data from all Entities in a Collective can be shared with a number of Principals (User or Client IDs). The owner of the Collective can set certain parameters on an Entity, such as its name. Members can only use the Entities (i.e. read their data).
The Collective is a powerful tool to link various Entities together and then share the data with other people or programs. For example, a fleet manager could use a Collective to conveniently see the data from all of their company’s vehicles in one place. However, a Collective could also refer to a household with multiple cars, a heat pump etc, and any member of that Collective could then view the data from all Entities within that Collective.

Security
The security principles are based on the domain model explained in the first part of this article. You must be the Tenant owner/admin or member of the Tenant, or the Collective owner or member of the Collective, to be able to see the data of a device. To authenticate against our platform, a Client ID or User ID is required. Once you have that, you must be the owner or member of a Tenant or Collective in order to access data. Every individual record, Tenant, Entity and Collective is secured with these security rights. The only way to access our platform is to have a Principal ID, which is either the Client ID (for programs) or the User ID (for people). This ID is either a member of a Tenant or a Collective, or the owner of an Entity. This determines whether you can see that Entity and its data and do something with this data or not. If you do not have rights to any Entities, Tenants or Collectives, you won’t be able to view any data.
A re.alto customer can have one Tenant on our platform but organise onboarded Entities into various Collectives within that Tenant. This means that if Company A is working with various companies/fleet managers, for example, they can onboard the vehicles from those companies and organise each set into its own Collective, so each company/fleet manager will only be able to see the data from the cars in their respective Collective and not the data from cars organised into a separate Collective by Company A. Any vehicle added to the Collective later can also be viewed without any additional work – that is the power of the Collective. Company A is the owner of the Collectives within their Tenant, but they can make Fleet Manager A a member of a Collective and assign them rights within that Collective, so that they can see data from vehicles within it. They will remain unable to view data from vehicles in the other Collectives within Company A’s Tenant. You have to be a member of a specific Collective to see the data from Entities within that Collective – and that is where the domain model meets the security model.
API Marketplace Fix (Following Elia API Updates)

Provider updates made to some of the APIs on our portal earlier this year caused them to suddenly stop working for our Marketplace users. One of re.alto’s developers investigated the cause and implemented a fix, resolving the issue for those actively using the APIs. More below.
The re.alto API Marketplace enables consumers to find and integrate with third-party energy data via APIs and offers providers a platform to easily advertise and sell their data. In the rare case that access to an API no longer functions as it should, re.alto’s developers will resolve the issue as soon as possible. A recent example of this was when one of the providers on our marketplace, Elia, made changes to the structure of their APIs earlier this year, leading to some technical issues for those trying to call the APIs from the re.alto platform. The issue was flagged and re.alto quickly began investigating and working towards a solution.
The issue occurred on 22 May 2024, when Elia changed their public APIs to make them more powerful, with the downside that the APIs became fragmented. The Elia databases are very large, and Elia greatly improved the versatility of their APIs by allowing users to be far more specific when filtering for data – something which is quite rare in APIs and gives the user far more control in specifying exactly which dataset they want to view. With such a large amount of data available (quarter-hourly data, minutely data, historical data, near real-time etc.), separating the data into different URLs based on time frame makes the quantity of data far more manageable for users seeking a specific dataset. Previously, users would receive a large amount of data and then have to filter it themselves on their end, depending on their needs. Now they can filter the desired data before receiving it (for example group it, order it, offset it or limit it), which gives more power and control to the user.
In making the APIs more versatile, however, the data was split up into new categories, with the consequence that users now need to call several different URLs, pointing at separate databases, to obtain the same kind of data that was previously available through a single URL. Which API to call now depends on the date, time frame and type of data required. For example, Elia has split their Imbalance Prices API (one of the more popular ones on our marketplace) into six separate APIs, depending on the date and type of data required. There are imbalance prices per quarter hour and per minute, as near real-time data but also as historical data, and some of these categories are then split into different tables again depending on whether the data is from before or after the changes were implemented on 22 May 2024 (for example, historical data before or after this date). So, where there used to be one API for this huge amount of data, the data is now spread across various data tables, each reached via a different API/URL. The update on Elia’s side made our implementation of the Elia APIs suddenly unusable, because the URL had changed and the data had been split into many different tables.
The question re.alto then faced was how we could continue to provide this data through our marketplace without our customers having to make multiple API calls to various separate data tables on Elia’s side. The users of re.alto’s marketplace value simplicity in obtaining data, so we wanted to resolve the issue by keeping the API as simple and familiar to use as possible and thereby maintain the value of our implementation. Before the changes, users could specify the day they wanted data from, and the API would provide all the data for that date. While updating the URL in our API gateway would have been a simple fix, we would then have had to add various APIs to the marketplace to cover all the data that the single API had previously provided access to, as it is now spread across different data tables at Elia.
Instead, we wanted to provide value to our users by consolidating the data from multiple data tables back into one single endpoint, without them having to take those different tables into account on their end. One of our developers updated the API definition and programmed our gateway to pull the requested data from the various Elia data tables, by configuring a branching redirect to those tables depending on the date given as a parameter. This means that data from various data tables is still available on re.alto via one single connection, keeping it simple for our users. Thanks to re.alto’s development team, our users can simply continue to specify the required date/time frame and they will see all of the expected data as before. The simplicity of calling these APIs has therefore been maintained for our marketplace users, and the changes from a user perspective are minimal.
If you’d like to learn more about our API Marketplace or our IoT connectivity solutions, please reach out via the contact us page on our website.
(The Imbalance Prices API is now split into six via Elia.)
Image source: Elia
Real-Time Syncing of API Documentation to ReadMe.io
Development
This guide shows you how to sync API documentation to readme.io in real-time.
01.08.2024

For an API to be successful, it needs to meet certain quality criteria, such as:
- intuitiveness and developer-friendliness, with a focus on easy adoption
- stability and backward compatibility
- security
- good documentation that is kept up to date
Focusing on the last one raises a few questions:
- How do we document?
- How do we make the documentation public?
- How can we keep the API and the documentation in sync in real-time with minimal maintenance effort?
How do we document?
The sky is the limit here, but there is also a clear standard for describing APIs: the OpenAPI Specification.
For our services, the spec is automatically generated from the C# XML doc comments.
This outputs a standardised JSON or YAML specification that can be interpreted by any tool higher up the stack that supports the same standard. The ubiquitous swagger UI is the most well-known example.
That is great for local development and internal access, but once the API goes live we need something publicly accessible by any consumer or by our partners.
How do we make it public?
This is usually done by uploading the specs in some kind of developer portal that supports OpenAPI Specs and offers support features like partner login, public access, high availability, user friendly UI, playground to try the API etc.
While there are a few options and solutions out there, we will focus on ReadMe.io, which essentially takes an OpenAPI spec as input and layers a nice UI on top of it.
Keeping it all in sync:
The goal is that once a developer makes a change to the API, the documentation updates at the same time, as soon as the code is deployed to production and ready to be consumed – without additional work. No copying and pasting of text, no manual editing, no updating of wiki pages.
The immediate go-to solution is to automate this process in the CI/CD pipeline.
What we need:
- tooling to push the file to readme.io – this is provided by readme
- an API spec file (json/yaml) – as input to the readme CLI
- automation in the CD Azure pipeline – to call the CLI whenever we have a code update
Here are the Azure pipeline tasks that do the trick:
- task: CmdLine@2
  displayName: 'Install readme.io CLI'
  inputs:
    script: 'npm install rdme@latest -g'

- task: CmdLine@2
  displayName: 'Update OpenApi spec in readme.io'
  inputs:
    script: 'rdme openapi https://$(PUBLICLY_ACCESSIBLE_HOST)/swagger/v1/swagger.json --key=$(RDME_KEY) --id=$(RDME_SPEC_ID)'
There are a few important points here that can be solved in other ways depending on your setup.
The CLI is run from the context of an Azure pipeline, so the pipeline needs a way to reach the swagger.json file. Either you:
- put it in a DevOps artifact and use it as a local file, or
- take it from a deployment location that is publicly accessible from the CI/CD pipeline
The last two arguments are the access key to readme.io and the API specification ID in readme.io; both can easily be obtained from the readme.io admin panel. If you have multiple APIs, you will have multiple different IDs. They are stored as locked variables in the Azure DevOps library.
Private infrastructure:
You might run into the case where your API deploys inside some infrastructure where Azure DevOps pipelines do not have access. This was our case:
We built our custom API gateway using YARP (which isn’t very compatible with swagger).
You can work around such a problem by exposing the swagger endpoint in the API gateway.
If you have multiple services, you expose them on different endpoints in the API gateway. YARP requires unique URLs for different endpoints.
For example:
- https://$(API_GATEWAY_CUSTOM_DOMAIN_HOSTNAME)/swagger/myAPI1/v1/swagger.json
- https://$(API_GATEWAY_CUSTOM_DOMAIN_HOSTNAME)/swagger/myAPI2/v1/swagger.json
Then implement a URL transformer in YARP to rewrite the URL to /swagger/v1/swagger.json
just before forwarding the requests to the private services.
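For illustration, if the gateway routes are configured from appsettings.json, the rewrite can be expressed as a YARP route transform. This is only a sketch – the route name, cluster name and internal address below are made up:
{
  "ReverseProxy": {
    "Routes": {
      "myAPI1-swagger": {
        "ClusterId": "myAPI1-cluster",
        "Match": { "Path": "/swagger/myAPI1/v1/swagger.json" },
        "Transforms": [
          { "PathSet": "/swagger/v1/swagger.json" }
        ]
      }
    },
    "Clusters": {
      "myAPI1-cluster": {
        "Destinations": {
          "primary": { "Address": "http://my-api-1.internal:8080/" }
        }
      }
    }
  }
}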
This is enough for most scenarios, but there is yet another special case that can occur on top of everything else.
Special scenario:
In case you have multiple APIs, you might keep the input/output models of that API in a separate nuget package that is referenced by all APIs.
In this scenario, the generated OpenAPI Spec will miss all the XML documentation that is part of the external nuget.
All of the documentation coming from the external package will be missing by default!
The problem is a bug in the .NET project system that prevents the existing XML doc file located in the nuget package from being copied to the output folder of the API.

How to fix it:
- A bit of manual intervention is required inside the .csproj file:
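A copy target of roughly this shape does the job; the target name and the package name filter (Connect.Api.Models) are assumptions made for the sake of the example:
<Target Name="CopyModelXmlDocs" AfterTargets="ResolveReferences">
  <!-- Batch over every referenced assembly and copy its sibling .xml doc file
       to the build output, but only when the file exists and the assembly
       belongs to our models nuget. -->
  <Copy SourceFiles="@(ReferencePath->'%(RootDir)%(Directory)%(Filename).xml')"
        DestinationFolder="$(OutputPath)"
        Condition="Exists('%(ReferencePath.RootDir)%(ReferencePath.Directory)%(ReferencePath.Filename).xml') And $([System.String]::Copy('%(ReferencePath.Filename)').Contains('Connect.Api.Models'))" />
</Target>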
This will iterate over all referenced dll files and try to copy the respective XML files to the output directory, if the file exists and if it matches the name of our models’ nuget. Otherwise a lot more XML would be copied over, needlessly increasing the deployment size.
This will work beautifully locally, but it will not work in a docker environment where the image is built by Azure pipelines.
- To make it work from the pipelines, where you will likely use the dotnet publish command, a new line needs to be added after the initial copy operation.
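Sticking with the sketch above, that extra line is a second Copy which differs only in its destination:
<!-- Same batching and filtering as before, but targeting the publish output
     used by `dotnet publish` instead of the regular build output. -->
<Copy SourceFiles="@(ReferencePath->'%(RootDir)%(Directory)%(Filename).xml')"
      DestinationFolder="$(PublishDir)"
      Condition="Exists('%(ReferencePath.RootDir)%(ReferencePath.Directory)%(ReferencePath.Filename).xml') And $([System.String]::Copy('%(ReferencePath.Filename)').Contains('Connect.Api.Models'))" />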
The only difference is the destination folder: instead of the output path, it is the publish directory.
- The last touch is to add the NUGET_XMLDOC_MODE environment variable to instruct the restore operation to unpack the XML docs from the nuget packages together with the DLLs. This is set by default to skip in order to shave a few seconds off the pipeline run time.
ENV NUGET_XMLDOC_MODE=none
RUN dotnet restore "myApiProject.csproj"
RUN dotnet build "myApiProject.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "myApiProject.csproj" -c Release -o /app/publish
Wrap up:
To ensure the success of your API, don’t neglect the documentation side of it. Never rely on manual updates of wiki pages, Confluence pages or even the UI editors that some tools and services put at your disposal.
Always strive to automate, and to couple the API code deployment with the updated docs deployment.
Depending on the particular case, the system and technology stack used and the tactics chosen, the hoops you might need to jump through to achieve this can vary – but the strategic goal should be the same.
Generating Stellar OpenAPI Specs
Development
This guide by our development team shows you how to generate stellar OpenAPI specs.
23.07.2024

Nowadays, OpenAPI Specifications are the de facto standard for describing and documenting APIs, enabling easy interoperability between API providers and API clients and consumers.
Producing a high quality OpenAPI spec will have a significant impact on the success of your API as it is the basis for a lot of tools that sit higher in the stack like:
- swagger UI
- developer portals
- autogenerated API clients (language agnostic)
In the .NET world, the most common way to generate an API spec is by using swagger. When properly configured, swagger supports versioning and automatic documentation based on the autogenerated XML docs.
Hence the better and more complete the XML docs, the better the resulting documentation and specification of the API.
In the next sections, we will put it all together, going from zero to a hero API spec that is:
- versioned
- serves multiple audiences
- well documented
- automation ready
Enable Versioning
The first step is fairly simple. We enable versioning and configure swagger:
public static IServiceCollection AddVersioning(this IServiceCollection services)
{
    services.AddApiVersioning(options =>
        {
            options.DefaultApiVersion = new ApiVersion(0, 0);
            options.AssumeDefaultVersionWhenUnspecified = true;
            options.ReportApiVersions = true;
        })
        .AddApiExplorer(options =>
        {
            options.GroupNameFormat = "'v'VVV";
            options.SubstituteApiVersionInUrl = true;
        });
    return services;
}
The configuration part is as generic as possible to be reusable in multiple different APIs:
public static IServiceCollection AddSwagger(this IServiceCollection services,
    string apiTitle, string apiDescription)
{
    services.AddSingleton(new SwaggerApiMetadata(apiTitle, apiDescription));
    services.ConfigureOptions<ConfigureSwaggerOptions>();
    return services.AddSwaggerGen(c =>
    {
        /* ... */
        var xmlDocumentationFilePaths = Directory.GetFiles(AppContext.BaseDirectory,
            "*.xml", SearchOption.TopDirectoryOnly).ToList();
        foreach (var fileName in xmlDocumentationFilePaths)
        {
            var xmlFilePath = Path.Combine(AppContext.BaseDirectory, fileName);
            if (File.Exists(xmlFilePath))
            {
                c.IncludeXmlComments(xmlFilePath, includeControllerXmlComments: true);
            }
        }
    });
}
The above are used like this from an API project:
builder.Services.AddVersioning();
builder.Services.AddSwagger("Connect Readings API",
"Provides endpoints to get entity readings.");
Now let’s unpack what is going on.
- The SwaggerApiMetadata record encapsulates the API properties specified at compile time (the title and description) so that they can be injected into the SwaggerGenOptions configurator at runtime.
- Just before AddSwaggerGen executes, the ConfigureSwaggerOptions configurator runs and sets the metadata for each version defined in the API. The configurator looks something like this:
class ConfigureSwaggerOptions(IApiVersionDescriptionProvider provider,
    SwaggerApiMetadata apiMetadata) : IConfigureOptions<SwaggerGenOptions>
{
    public void Configure(SwaggerGenOptions options)
    {
        foreach (var versionDescription in provider.ApiVersionDescriptions)
        {
            options.SwaggerDoc(versionDescription.GroupName,
                new OpenApiInfo
                {
                    Title = apiMetadata.Title,
                    Version = versionDescription.ApiVersion.ToString(),
                    Description = apiMetadata.Description
                });
        }
    }
    // shortened for brevity
}
The foreach loop over the XML documentation files picks up every XML doc file that is found and loads it into swagger.
The commented-out placeholder at the top of the AddSwaggerGen callback is where the features detailed in the next section will go.
Customising Swagger Generation
There are times when you want to make programmatic changes to the way the final OpenAPI spec json file is generated – for example, exposing an HTTP header that is used by all endpoints in the OpenAPI spec, so that it shows up in the UI and can easily be filled out by the user.
For this, we can use swagger filters. Think of filters like .NET middleware but for swagger.
services.AddSwaggerGen(c =>
{
    c.DocumentFilter<AddServerFilter>();
    c.DocumentFilter<AnotherDocumentFilter>();   // placeholder names for your own filters
    c.OperationFilter<SomeOperationFilter>();
    // ...
});
A document filter applies to the whole document, while operation filters are applied to each endpoint path.
Everything so far can be nicely packaged in a shared nuget library to be reused across multiple API services.
Multiple API Audiences
It is often the case that the same API service contains both public endpoints, exposed to partners via a gateway, and private, internal ones that are only used by your team to carry out admin tasks and the like.
The internal endpoints are part of the same logical API version (e.g. v1, v2, v3) but should be hidden from the public spec that partners see, while still being visible in the internal swagger UI that your team (developers) can access.
Considering the infrastructure pieces from the previous sections are in place, there are only two things left to do:
- define internal versions of the API
[ApiController]
[ApiVersion("1.0")]
[ApiVersion("1.0-internal")]
[Produces("application/json")]
[Route("v{version:apiVersion}/entities")]
public sealed class MyController : ApiController
{
...
}
- map endpoints to either the public version, the internal version or both
[HttpGet("last")]
[MapToApiVersion("1.0")]
[MapToApiVersion("1.0-internal")]
[ProducesResponseType(typeof(MyResponse), 200)]
public async Task<ActionResult<MyResponse>> GetResponseAsync(...)
{
    ...
}
Strictly speaking, there are two ASP.NET versions defined but only one logical version (1.0). If you leave one of them out, that endpoint will simply not appear in the generated spec for that version.
This will generate a swagger UI that will allow us to pick which definition to use:
Fine Tuning
Usually you will need to fine-tune the specs based on the audience. For example, a developer portal in which we want to expose the spec will probably have a different host server than the localhost version in swagger UI. The ideal place to make these conditional tweaks is in swagger filters.
For instance, the AddServerFilter from the previous section can look something like this:
public class AddServerFilter : IDocumentFilter
{
    public void Apply(OpenApiDocument swaggerDoc, DocumentFilterContext context)
    {
        var url = "https://platform.domain.com/api";
        if (!context.DocumentName.Contains("-internal"))
        {
            swaggerDoc.Servers.Add(new OpenApiServer { Url = url });
        }
    }
}
This will add the following attribute to the resulting OpenAPI spec, which will later be picked up by developer portal tooling and UI.
"servers": [
{
"url": "https://platform.domain.com/api"
}
],
Mapping XML Docs to Swagger UI
The quality of the resulting OpenAPI spec is determined by the quality of the XML docs, attributes, signatures and data types of the API endpoints. The more complete they are, the easier it is to use swagger UI or any other similar tool based on OpenAPI specifications.
Take the following code as a source:
/// <summary>
/// Returns the last reading
/// </summary>
/// <param name="entityId">The id of the entity.</param>
/// <param name="type">The type of the reading data.</param>
/// <returns>A <see cref="Task"/> that represents the asynchronous operation.</returns>
/// <remarks>Returns the most recent normalized reading for an entity.</remarks>
[HttpGet("last")]
[MapToApiVersion("1.0")]
[MapToApiVersion("1.0-internal")]
[ProducesResponseType(typeof(Connect.Api.Models.Reading.V1.Reading), 200)]
public async Task<ActionResult<Connect.Api.Models.Reading.V1.Reading>> GetLastReadingAsync(
    [BindRequired] [FromRoute] Guid entityId,
    [FromQuery] string? type = "electricity")
This will generate the following swagger UI. Notice the entityId example and the default value for type: they are both pre-populated and ready to use. Notice also the remarks section, which is entirely optional and not added by default. Once present, it gives the API endpoint a more detailed description.
Other tools provide similar capabilities and will offer the clients of the API a greatly improved experience.
Using the same XML docs in the models that are used as part of the endpoint interface, we obtain a nicely documented schema. Notice all the descriptions in the response schema below:
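As a simplified illustration (a hypothetical model, not the actual Reading contract), XML docs on the model’s members are all it takes for those descriptions to flow into the schema:
/// <summary>A single normalized reading for an entity.</summary>
public sealed record Reading
{
    /// <summary>The id of the entity the reading belongs to.</summary>
    public Guid EntityId { get; init; }

    /// <summary>The moment the reading was taken, in UTC.</summary>
    public DateTimeOffset Timestamp { get; init; }

    /// <summary>The measured value, normalized to the unit below.</summary>
    public decimal Value { get; init; }

    /// <summary>The unit of the measured value, e.g. kWh.</summary>
    public string Unit { get; init; } = "kWh";
}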
Next Steps
At this point, we have a well documented spec with multiple versions for both internal and public clients that is fine-tuned to work locally but also for publishing to a publicly accessible location from where partners can try out our API.
The next step will be to put in place some automation that will do this publishing automatically and keep it in sync with the code throughout the lifetime of the API.
Coming up: Our next technical article will look at real-time syncing of API documentation to Readme.io.
Scaling with Azure Container Apps & Apache Kafka
Development
This article explains how to scale using Container Apps and Apache Kafka.
11.06.2024

This article, written by re.alto’s development team, explains how to scale with Azure Container Apps and Apache Kafka. While such documentation already exists for use with Microsoft products, our development team did not find any similar documentation on how to scale containers using Apache Kafka and Azure Container Apps. We have since figured out how to do this ourselves and want to share our knowledge with other developers. The article below is intended to act as a guide for other developers looking to do something similar.
(For an introduction to this topic, please see our previous article on containerisation and container apps here.)
To allow containers, specifically replicas of a container, to be scaled up and down depending on the number of messages flowing through an Apache Kafka topic, it is possible to set up scaling rules for the container. But first, we need to create a container image that can consume messages from an Apache Kafka topic.
Consuming messages from an Apache Kafka topic:
To consume messages from an Apache Kafka topic, various solutions are already available. Since we have chosen to host our Apache Kafka cluster with Confluent, we have also decided to use their nuget package. You can find it on the nuget.org feed under the name Confluent.Kafka.
When you install this nuget package, you can use the ConsumerBuilder class to create a new consumer and subscribe to a topic.
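For example, a minimal consumer might be set up like this – the endpoint, credentials, group and topic below are placeholders, and we assume the usual Confluent Cloud set-up with SASL/SSL and an API key and secret:
using Confluent.Kafka;

var config = new ConsumerConfig
{
    // Placeholder values – use your own cluster endpoint and credentials
    BootstrapServers = "<your-cluster-endpoint>:9092",
    SecurityProtocol = SecurityProtocol.SaslSsl,
    SaslMechanism = SaslMechanism.Plain,
    SaslUsername = "<api-key>",
    SaslPassword = "<api-secret>",
    GroupId = "my-consumer-group",
    AutoOffsetReset = AutoOffsetReset.Earliest
};

// Build a consumer for string keys/values and subscribe to the topic
using var consumer = new ConsumerBuilder<string, string>(config).Build();
consumer.Subscribe("my-topic");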
Now all that is left is to consume the messages.
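Continuing the sketch above, a basic consume loop could look like this:
var cts = new CancellationTokenSource();
Console.CancelKeyPress += (_, e) => { e.Cancel = true; cts.Cancel(); };

try
{
    while (!cts.IsCancellationRequested)
    {
        // Blocks until a message is available or the token is cancelled
        var result = consumer.Consume(cts.Token);
        Console.WriteLine($"{result.TopicPartitionOffset}: {result.Message.Value}");
        // ... your own processing goes here ...
    }
}
catch (OperationCanceledException)
{
    // expected on shutdown
}
finally
{
    // Leave the consumer group cleanly and commit final offsets
    consumer.Close();
}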
You now have the code to consume messages from an Apache Kafka topic. If you would like more information, you can have a look at the documentation provided by Confluent at https://docs.confluent.io/kafka-clients/dotnet/current/overview.html. Next, we build a container image with this code inside, register it in our container registry and create a container app based on the image – but how do we tell Azure Container Apps how to scale this container?
Scaling Azure Container Apps:
Azure Container Apps uses a KEDA scaler to handle the scaling of any container. You will not have to configure the KEDA scaler yourself, but you will have to tell the container which scaling rules to use. You will find a lot of examples on the Microsoft documentation pages on how to scale based on HTTP requests or messages in an Azure Service Bus. However, if you would like to know how to configure the scaling rules for an Apache Kafka topic, you may find yourself out of luck with the available documentation. But we explored and managed to do it like this:
You will need to set up a custom scaling rule for your container. We are using bicep to deploy our containers, so the examples shown below are in bicep, but they will translate easily to any other way of deploying to Azure. To understand bicep, or to see how to create a bicep file for an Azure Container App, have a look here: https://learn.microsoft.com/en-us/azure/templates/microsoft.app/containerapps?pivots=deployment-language-bicep.
We are going to focus on the scaling part of the container, so to begin with, you need to configure the minimum and maximum number of replicas that the container can scale between.
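A rough sketch of the relevant part of the container app’s template (following the Microsoft.App/containerApps schema) looks like this in bicep:
template: {
  containers: [
    // ... container definition ...
  ]
  scale: {
    minReplicas: 0
    maxReplicas: 6
    rules: [
      // the custom Kafka scaling rule goes here (see below)
    ]
  }
}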
In the example above, we have set the minimum number of replicas running to 0, which means it will scale down to zero replicas running when there are no messages to consume from the Apache Kafka topic. This means that we can save on costs, as well as freeing up those resources for something else.
The maximum number of replicas is set to 6. You can go as high as you like, but going above the number of partitions in the Apache Kafka topic would be pointless, since any replica above the number of partitions will not be able to consume messages. So this should be set to a number less than or equal to the number of partitions of the Apache Kafka topic.
Now let’s add the rule.
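We are not reproducing our exact module here, but a custom rule of roughly this shape worked for us. The rule name, topic, consumer group, threshold and secret names are illustrative, and the bootstrap servers are passed in as a bicep parameter in this sketch:
rules: [
  {
    name: 'kafka-topic-scaling'
    custom: {
      type: 'kafka' // use the KEDA Apache Kafka scaler
      metadata: {
        bootstrapServers: kafkaBootstrapServers
        consumerGroup: 'my-consumer-group'
        topic: 'my-topic'
        lagThreshold: '10'
      }
      auth: [
        {
          secretRef: 'kafka-username'
          triggerParameter: 'username'
        }
        {
          secretRef: 'kafka-password'
          triggerParameter: 'password'
        }
        {
          secretRef: 'kafka-sasl-mechanism'
          triggerParameter: 'sasl'
        }
      ]
    }
  }
]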
The name of the rule can be anything you like; the name chosen here is just an example. After configuring the name, we need to define the rule. In our case, it is a custom rule. The first thing that the custom rule needs to know is the type of the rule. This translates to the KEDA scaler that is going to be used for the rule. We want to use the Apache Kafka KEDA scaler, but any available KEDA scaler can be used here.
Next comes the metadata, which means we need to provide the bootstrap servers, the consumer group and the topic that was used in the code sample to inform the KEDA scaler which Apache Kafka topic to listen to and determine whether scaling up or down is needed.
And as a final step, we need to allow the KEDA scaler to access the Apache Kafka topic. We have created a few secrets to store the required information (please look at the Microsoft documentation regarding deploying Azure Container Apps mentioned above for more information). We will have to provide the bootstrap servers, the username and password and the type of authentication to the KEDA scaler to allow it to connect to the topic. Once this is all configured and the container is deployed using the bicep module, you have your Azure Container App configured to scale based on the number of messages in the Apache Kafka topic.
Containerisation: Introduction & Use Case
Development
An introduction to the technologies used in containerisation (by re.alto’s dev team)
11.04.2024

Containerisation: An Introduction to Technologies Used in Containerisation
& a re.alto Use Case Example
In this article, we want to focus on the more technical side of re.alto and are going to highlight some of the technologies currently used by our developers when it comes to container apps.
What is a container/containerisation?
Containerisation is the bundling of software code with all its necessary components into one single, easily portable package (a “container”). It allows software developers to deploy and scale applications more efficiently and removes the need for running a full operating system for individual applications.
Docker, an open-source platform for developing software in containers (applications and any supporting components), enables developers to separate their applications from their infrastructure. Developers write code to create a container image; this image is then deployed to a container image repository. A container is based on a container image – when the container starts up, it pulls the image and executes the code inside it. Depending on the number of requests (i.e. calls to an API), a container can have anywhere from zero to many replicas (instances of a container) running simultaneously, in which case an orchestration tool such as Kubernetes is often required to handle this.
Why Kubernetes?
Kubernetes is an open-source platform that manages containerised applications and services as clusters. Kubernetes offers a framework for software developers to run distributed systems efficiently, handling the scaling of applications and providing other useful features, such as load balancing and self-repair of containers. However, it can be complicated to manage your own Kubernetes cluster without prior experience of doing so. Most developers lack interest and knowledge in this area as it is expertise that falls more within an infrastructural role in a company – and many start-ups have not yet scaled to the size where this hire is necessary. That is where Microsoft Azure Container apps comes in.
Game changer: Microsoft Azure Container Apps
Like many smaller companies (and especially start-ups), re.alto does not have the time or resources to manage our own Kubernetes cluster, so we decided to use Azure Container Apps, an orchestration service introduced by Microsoft in 2022 for deploying containerised applications. This service really is a game-changer for start-ups, as it greatly simplifies the management of a Kubernetes cluster. Developers can still create containers but no longer face the hassle of configuring and maintaining their own cluster, as Microsoft does all this for them. With Microsoft’s managed environment taking over the orchestration of their cluster, developers can focus more on the actual container apps they want to build.
Our use case: combining technologies
How we use containerisation (Azure Container Apps) and Apache Kafka to get data from a smart meter dongle and stream it elsewhere.
One use case, for example, focuses on the flow of our Xenn P1 dongles. The image above shows the stages of this flow. In this case, the IoT device – our Xenn dongle (attached to a smart meter for use with re.alto’s Xenn app) – pushes a message to an MQTT broker. A container consumes this message. The raw message is then distributed by Apache Kafka.
Kafka has the ability to listen to millions of messages per second and has the benefit of storing messages for a certain period of time (in our case, seven days). That means if the connection to one container is temporarily lost, or if we stop and restart it, it will still read and process any messages from that period once it is back online – meaning no messages/commands are missed or lost. Kafka keeps track of which consumer groups (or container apps) have read which messages and, although we don’t do this currently, it is also possible to set up a schema for an Apache Kafka topic which only allows you to push messages in a specific format into a topic, guaranteeing that each message is of a specific format/standard. Using Kafka is greatly beneficial to companies like re.alto as we have far too many data streams to manage all of them ourselves.
The message is then picked up by another container app and is stored in our data storage. While all containers receive the same message, they can be programmed to follow different instructions. This enables us to have containers with individual responsibilities, such as a container for tracking peak detection. In this case, the container would extract the relevant parts of the message and compare the readings with all other readings in the same quarter-hour to determine whether the consumer will run into a peak. If this is the case, the consumer receives a push notification in the Xenn app.
Azure Container Apps provides start-ups with an invaluable service in the management of Kubernetes clusters, while Kafka simplifies handling large amounts of data streams and ensures no messages from IoT devices get lost or go unread.
In our next article, we’ll be looking in more technical detail at how to scale with Kafka and Azure Container Apps and will provide you with some of the code needed to do so.
Tutorial: Building an Energy Data API from Scratch (in Under an Hour)
Development
Our developer guides you on how to build an energy data API.
Energy APIs/Development

Web APIs are becoming increasingly widespread in the energy sector. Historically they have been a popular technology for communicating market data (e.g. wholesale electricity prices) and grid information. Now we are seeing more and more parties adopting APIs as part of their digitalisation strategy.
This tutorial is for anyone who wants to get a better understanding of what an API does and how to build one from scratch in under an hour – all in the context of the energy sector.
What We'll Be Building: a PPA Data Engine
For this tutorial, we are going to pretend that we are a market intelligence company that is researching trends relating to Power Purchase Agreements (PPAs). Our goal is to create and maintain an internal database of PPA deals which we can interact with via an API. The API will be simple but allows us to Create, Read, Update, and Delete data (typically referred to as a CRUD application).
To develop our API, we are going to use a popular web development framework called Express.
Express is described as “a minimal and flexible Node.js web application framework that provides a robust set of features for web and mobile applications”. Node.js is a JavaScript runtime environment built on top of Chrome’s V8 engine. It essentially enables you to run JavaScript code outside of a web browser (e.g. for building scripts, backends, APIs, etc).
There are tons of other web development frameworks that can be used, some examples include:
- Koa.js, Meteor.js (JavaScript)
- ASP.NET Core (C#)
- Django, Flask (Python)
- Laravel (PHP)
- Spring (Java)
For this tutorial we’re going to use Express since it allows us to get up and running very, very quickly 👍
You can find all of the code we will be writing for this project on our GitHub page.
Tools and Set-Up
Before we start, we’re going to need a couple of things:
- Node.js installed on your computer
- Some kind of code editor (optional) – a popular option is VSCode
- A command-line interface to run our application – on Windows, Command Prompt will do
Go ahead and install Node.js. Once done, let’s begin by setting up our project.
- Open up a Command Line Interface (CLI) such as Windows Terminal, PowerShell, Command Prompt, Bash, etc.
- Browse to wherever you want to store your project code (I use C://src or C://www for all my projects).
- Create a new directory for the project named “build-api-tutorial” and let’s go into the root of our new directory.
In Windows Terminal / PowerShell, I’m using the command “cd PATH” to browse directories and creating a new directory using “mkdir NAME”. You don’t have to do this in a CLI of course, just create a new folder using your standard file browser. Here’s a list of CLI commands you might find useful.
Using your CLI, go to the root folder of the project. Let’s run the command “npm init”. NPM (aka node package manager) ships with Node.js out of the box and allows us to pull different packages.
You’ll be prompted to enter info relating to the project. For this tutorial we can stick with leaving things as they are; hit enter/return until the prompts have cleared.
Great, you should now see a new file called package.json in the folder. The last thing we need to do before getting to the code is install express. Run the command “npm install express” in your CLI. Once installed, you should be greeted with something like:
Next create a blank “index.js” file and save it to this directory from your code editor. Your folder structure should now look like this:
“node_modules” contains all of our external packages that we installed (including Express and its dependencies). “package-lock.json” contains a list of our dependencies and the versions locked to our project. With set-up out of the way, it’s time to get started!
Getting Started
To begin, we’re going to use the “Hello World” example described on the Express website. Copy and paste the following snippet into your index.js file and hit save.
const express = require('express')
const app = express()
const port = 3000

app.get('/', (req, res) => {
  res.send('Hello World!')
})

app.listen(port, () => {
  console.log(`Example app listening at http://localhost:${port}`)
})
Go back to the CLI and type “node index.js”. This tells Node.js to run the index file containing our Express API server. You should see the following message which lets us know that the server is running locally:
Open up a browser, go to “localhost:3000” and you should be greeted with the following message:
Awesome! You have just built your first web API in 11 lines of code 😎
The client/browser requests an endpoint (http://localhost:3000/), the API server receives the request, and returns a message, in this case “Hello World!”.
You can shut down the server by opening up your CLI and holding the “Ctrl” key and pressing “C”. Make sure you restart the server before trying to run any updated code.
Defining the Data Model
As we are going to build an API dealing with PPA deals, it’s important to define what that data actually looks like.
PPA deals typically have the following in common:
- A buyer and seller of electricity
- The technology used (e.g. solar/wind) and capacity of the installation
- The duration of the contract and the country and date where a deal took place
Working with a database is outside the scope of this tutorial, so what we are going to do instead is mock one. We are going to take the key bits of information above and store the data in memory as JavaScript objects like so:
// An example PPA deal object
{
  id: 1, // a unique identifier for this deal
  seller: 'Company X',
  buyer: 'Company Y',
  country: 'An example country',
  technology: 'Solar, wind, etc',
  capacity: 42,
  term: 'xx years',
  date: '2021-07-21',
}
The object above is comprised of “keys” – e.g. “buyer”, “seller”, etc. – and their corresponding “values” – e.g. “Company Y”, “Company X”, etc.
An aside: when using actual databases, solutions that store data as objects with key-value pairs like this do exist. They are typically referred to as NoSQL-type solutions. Of course, there are many different paradigms and ways of storing data, one of the most common being SQL-type databases, which share a lot of similarities with the humble spreadsheet. In that type of database, columns represent attributes (e.g. seller, buyer, country) and rows represent each record with a corresponding set of values.
Cool, now we have a structure for how we want to store and represent data. Let’s implement it on our API.
What we will do first is mock a database by creating a variable which stores an array (or list) of our PPA deals. Each deal stored in our array will have the object structure from above.
Open up “index.js” and edit your code to the following:
const express = require('express');
const app = express();
const port = 3000;
app.use(express.urlencoded({ extended: true }));
app.use(express.json());
// Data array containing mock PPA deal information
let data = [
{
id: 1,
seller: 'Generic Utility Co',
buyer: 'Buyer Industries',
country: 'Germany',
technology: 'Solar',
capacity: 15,
term: '12 months',
date: '2021-07-07',
},
{
id: 2,
seller: 'Generator X',
buyer: 'XYZ Tech Corp',
country: 'Belgium',
technology: 'Offshore Wind',
capacity: 500,
term: '5 years',
date: '2021-07-07',
},
{
id: 3,
seller: 'Another Power Seller',
buyer: 'Large Corporate Co',
country: 'France',
technology: 'Onshore Wind',
capacity: 50,
term: '12 months',
date: '2021-07-06',
},
{
id: 4,
seller: 'Generator X',
buyer: 'Large Corporate Co',
country: 'United Kingdom',
technology: 'Solar',
capacity: 20,
term: '5 years',
date: '2021-07-06',
},
{
id: 5,
seller: 'ABC Energy 123',
buyer: 'XYZ Tech Corp',
country: 'Spain',
technology: 'Solar',
capacity: 150,
term: '10 years',
date: '2021-07-05',
},
];
// Let's print our data to console
console.log(data);
app.listen(port, () => {
console.log(`Server listening at http://localhost:${port} ⚡`);
});
Hit save. Then let’s fire up the API by typing “node index.js” in your CLI. You should see something like this:
Here we are just logging/printing the data to screen to check everything is working as expected. With our “database” created, it’s time to move on to the fun part.
Creating Our API Endpoints
In designing our API, we’re going to follow RESTful standards. Microsoft have a great article on the topic if you want to learn more. For our purposes, the API we want to build has the following functionality:
- Return a list of PPA deals, which can optionally be filtered
- Get details about a specific PPA deal
- Create a new PPA deal
- Update an existing PPA deal
- Delete a PPA deal
Translating these requirements into API routes (or endpoints) using a REST approach would yield the following logic:
// Get a list of PPA deals - GET "/api/deals"
app.get('/api/deals', async (req, res) => {
  // get something
});
// Get a specific PPA deal - GET "/api/deals/{id}"
app.get('/api/deals/:id', async (req, res) => {
  // get something specific
});
// Create a new PPA deal - POST "/api/deals"
app.post('/api/deals', async (req, res) => {
  // create something
});
// Update an existing PPA deal - PATCH "/api/deals/{id}"
app.patch('/api/deals/:id', async (req, res) => {
  // update something
});
// Delete a PPA deal - DELETE "/api/deals/{id}"
app.delete('/api/deals/:id', async (req, res) => {
  // delete something
});
Above we’re using the same function used in the “Hello World” example to define the different endpoints of our API.
These endpoints represent specific URLs that we can call on our API with various HTTP methods (e.g. GET, POST, PUT, PATCH, DELETE).
Depending on how we call the endpoint, and what variables we pass through with our request, the server can respond in different ways.
Let’s look at an example – update your code to the following:
const express = require('express');
const app = express();
const port = 3000;
app.use(express.urlencoded({ extended: true }));
app.use(express.json());
let data = [
{
id: 1,
seller: 'Generic Utility Co',
buyer: 'Buyer Industries',
country: 'Germany',
technology: 'Solar',
capacity: 15,
term: '12 months',
date: '2021-07-07',
},
{
id: 2,
seller: 'Generator X',
buyer: 'XYZ Tech Corp',
country: 'Belgium',
technology: 'Offshore Wind',
capacity: 500,
term: '5 years',
date: '2021-07-07',
},
{
id: 3,
seller: 'Another Power Seller',
buyer: 'Large Corporate Co',
country: 'France',
technology: 'Onshore Wind',
capacity: 50,
term: '12 months',
date: '2021-07-06',
},
{
id: 4,
seller: 'Generator X',
buyer: 'Large Corporate Co',
country: 'United Kingdom',
technology: 'Solar',
capacity: 20,
term: '5 years',
date: '2021-07-06',
},
{
id: 5,
seller: 'ABC Energy 123',
buyer: 'XYZ Tech Corp',
country: 'Spain',
technology: 'Solar',
capacity: 150,
term: '10 years',
date: '2021-07-05',
},
];
// Hello World Example Route
app.get('/', (req, res) => {
res.send('Hello World!');
});
// Get a list of PPA deals
app.get('/api/deals', async (req, res) => {
try {
return res.status(200).json({ data });
} catch (e) {
console.log(e);
}
});
app.listen(port, () => {
console.log(`Server listening at http://localhost:${port} ⚡`);
});
Save and run the server (CLI -> “node index.js”). In a browser go to “localhost:3000” and you should be greeted with a familiar message. When you go to “localhost:3000/api/deals”, however, you should see this:
The API receives a GET request from the browser for the endpoint “/api/deals” and responds with the PPA data we defined earlier. Protip: the above might look like a mess in your browser depending on whether you have a JSON formatter extension installed. We’re going to be using a dedicated API tool later in the tutorial, so don’t worry about it if that’s the case.
Coding the Business Logic
Now that we have defined what we expect our API to do and the corresponding routes, it’s time to flesh out the business logic.
In general, when an API receives a request, the following takes place before an action is performed (not necessarily in this order):
- Routing: Did the request hit a valid/existing endpoint with the correct method (GET, POST, etc)? If yes, carry on, otherwise return an error.
- Auth: Is the endpoint “protected” in any way? For example, the endpoint might only be accessible to certain users, such as when deleting data. The API checks that the request has valid permissions to perform the action (authenticating/authorising the request). Unauthorised requests are rejected.
- Validation: Is the endpoint expecting any variables or a payload? If they are missing or erroneous (e.g. wrong format, data type, etc) the API returns an error.
- Security: Is the request compliant with all security measures? For example, is it adhering to the rate at which the API is allowed to be called (e.g. 100 requests/minute)?
Once a request has gone through the checks above, our server is then free to get down to business and deal with it.
Let’s flesh out our API by adding some basic validation logic and business logic (which deals with the “database” itself) to the endpoints described previously. Note: teaching coding itself is outside the scope of this article, but if you would like to learn more, I highly recommend freeCodeCamp as a resource. Copy and paste the final bit of code from below (or alternatively from our GitHub page) into “index.js”.
/**
* Set-up
* Importing modules and configuring some settings
*/
const express = require('express');
const app = express();
const port = 3000;
app.use(express.urlencoded({ extended: true }));
app.use(express.json());
/**
* Data
* Normally we would fetch and store data via a database or file-system
* For this tutorial we're keeping it simple and creating mock data in memory
*/
// Data array containing mock PPA deal information
let data = [
{
id: 1,
seller: 'Generic Utility Co',
buyer: 'Buyer Industries',
country: 'Germany',
technology: 'Solar',
capacity: 15,
term: '12 months',
date: '2021-07-07',
},
{
id: 2,
seller: 'Generator X',
buyer: 'XYZ Tech Corp',
country: 'Belgium',
technology: 'Offshore Wind',
capacity: 500,
term: '5 years',
date: '2021-07-07',
},
{
id: 3,
seller: 'Another Power Seller',
buyer: 'Large Corporate Co',
country: 'France',
technology: 'Onshore Wind',
capacity: 50,
term: '12 months',
date: '2021-07-06',
},
{
id: 4,
seller: 'Generator X',
buyer: 'Large Corporate Co',
country: 'United Kingdom',
technology: 'Solar',
capacity: 20,
term: '5 years',
date: '2021-07-06',
},
{
id: 5,
seller: 'ABC Energy 123',
buyer: 'XYZ Tech Corp',
country: 'Spain',
technology: 'Solar',
capacity: 150,
term: '10 years',
date: '2021-07-05',
},
];
/**
* Services
* These functions handle the business logic needed by the API
*/
// Validates that a specific PPA deal identifier exists
const checkDealIdExists = async (id) => {
const check = data.filter((deal) => deal.id == id);
if (check.length > 0) {
return true;
}
return false;
};
// Helper function to filter deals based on query parameters
const filterDeals = (deal, query) => {
if (!query) return true;
for (const [key, value] of Object.entries(query)) {
if (deal[key] != value) return false;
}
return true;
};
// Checks that incoming data has all expected properties and that they aren't empty
const validateData = async (payload) => {
const propertiesToCheck = ['seller', 'buyer', 'country', 'technology', 'capacity', 'term', 'date'];
for (let i = 0; i < propertiesToCheck.length; i++) {
if (!payload.hasOwnProperty(propertiesToCheck[i]) || payload[propertiesToCheck[i]].length == 0) {
return false;
}
}
return true;
};
// Retrieve a list of deals
const getDeals = async (query) => {
// If query parameters exist, filter the deals returned to those that match
if (query) {
return data.filter((deal) => filterDeals(deal, query));
}
return data;
};
// Retrieve a specific deal based on a deal identifier
const getDealById = async (id) => data.filter((deal) => deal.id === parseInt(id, 10));
// Create a new deal in the mock database
const createDeal = async (payload) => {
// Find the largest id existing in table and increment by 1
const id = data.map((deal) => deal.id).reduce((a, b) => Math.max(a, b)) + 1;
// Create the deal based on information passed through the API
const deal = {
id,
seller: payload.seller,
buyer: payload.buyer,
country: payload.country,
technology: payload.technology,
capacity: parseInt(payload.capacity, 10),
term: payload.term,
date: payload.date,
};
data.push(deal);
return deal;
};
// Update a specific deal in the mock database
const updateDeal = async (id, payload) => {
// Find the index of the record to update
const index = data.findIndex((deal) => deal.id == id);
// Update the deal based on information passed through the API
data[index] = {
id,
seller: payload.seller,
buyer: payload.buyer,
country: payload.country,
technology: payload.technology,
capacity: parseInt(payload.capacity, 10),
term: payload.term,
date: payload.date,
};
return data[index];
};
// Delete a specific deal in the mock database
const deleteDeal = async (id) => {
data = data.filter((deal) => deal.id != id);
return data;
};
/**
* Routes / Controllers
* Here we describe the endpoints for the API and how they are handled
*/
// Hello World Example Route
app.get('/', (req, res) => {
res.send('Hello World!');
});
// Get a list of PPA deals
app.get('/api/deals', async (req, res) => {
try {
// Retrieve a list of deals for a given query
const deals = await getDeals(req.query);
// Respond with the deals that matched our query
return res.status(200).json({ data: deals });
} catch (e) {
console.log(e);
}
});
// Get a specific PPA deal
app.get('/api/deals/:id', async (req, res) => {
try {
// Check that an ID exists in the database
const checked = await checkDealIdExists(req.params.id);
// Return an error if it isn't
if (!checked) return res.status(400).json({ error: 'Could not find this id' });
// Otherwise respond with data for this specific deal
const deal = await getDealById(req.params.id);
return res.status(200).json({ data: deal });
} catch (e) {
console.log(e);
}
});
// Create a new PPA deal
app.post('/api/deals', async (req, res) => {
try {
// Check that the incoming data is valid
const validated = await validateData(req.body);
// Return an error if it isn't
if (!validated) return res.status(400).json({ error: 'Empty or missing properties and/or values' });
// Create the deal in the mock database
const createdDeal = await createDeal(req.body);
// Respond with the newly created deal information
return res.status(201).json({ data: createdDeal });
} catch (e) {
console.log(e);
}
});
// Update an existing PPA deal
app.patch('/api/deals/:id', async (req, res) => {
try {
// Check that an ID exists in the database
const checked = await checkDealIdExists(req.params.id);
// Return an error if it isn't
if (!checked) return res.status(400).json({ error: 'Could not find this id' });
// Check that the incoming data is valid
const validated = await validateData(req.body);
// Return an error if it isn't
if (!validated) return res.status(400).json({ error: 'Empty or missing properties and/or values' });
// Update the specific deal with the new information
const deal = await updateDeal(req.params.id, req.body);
// Respond with the updated deal information
return res.status(200).json({ data: deal });
} catch (e) {
console.log(e);
}
});
// Delete a PPA deal
app.delete('/api/deals/:id', async (req, res) => {
try {
// Check that an ID exists in the database
const checked = await checkDealIdExists(req.params.id);
// Return an error if it isn't
if (!checked) return res.status(400).json({ error: 'Could not find this id' });
// Delete the specified deal
const deals = await deleteDeal(req.params.id);
// Respond with the remaining deals after deletion
return res.status(200).json({ data: deals });
} catch (e) {
console.error(e);
return res.status(500).json({ error: 'Something went wrong' });
}
});
/**
* Start the API 🚀
*/
app.listen(port, () => {
console.log(`Server listening at http://localhost:${port} ⚡`);
});
You will notice that there’s a fair amount of additional code! Alongside the “database” and routes we defined earlier, there is now a set of functions (services) which perform concrete actions such as validating an input or making a change to the database. Near the bottom are the routes themselves, which encapsulate the logic of how our API works. In production-grade code, this “controller” logic would typically be separated from the routes – but for demo purposes this works well.
As a walkthrough example: if we look at the “Create a new PPA deal” endpoint, you can see that “/api/deals” accepts a POST request. The endpoint expects to receive a body of data with that request, which it validates using the “validateData” service. This service checks that all of the expected keys exist in the data (e.g. buyer, seller, technology, etc.). If any are missing, the API returns an error. If the body of data is valid, we add an entry to the database using the “createDeal” service. If that is successful, the API responds with the newly created record.
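The actual “validateData” service is defined further up in the listing, but purely as an illustration of the idea, a minimal check along these lines might look like the sketch below (the function name and the list of required fields are assumptions based on the deal objects used in this tutorial):
// Illustrative sketch only: check that every required deal property is present and non-empty
const requiredDealFields = ['seller', 'buyer', 'country', 'technology', 'capacity', 'term', 'date'];
const validateDealPayload = (payload) =>
  payload != null &&
  requiredDealFields.every((field) => payload[field] != null && payload[field] !== '');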
There are a number of comments included to explain what each piece of code is doing. If anything is unclear, please feel free to reach out or comment.
Testing Our API
With our API built, it’s time to get testing! Boot up the server once more (CLI -> “node index.js”). Next, we’re going to grab a popular API testing and development tool called Postman. It makes it super easy to test and play with our API, so head over to Postman and create an account. Once logged in, let’s create an API request.
On the next screen, add “localhost:3000/api/deals” in the URL field and hit “Send”. You should see the data we defined earlier.
Great, that route seems to work! Let’s try something new: update the URL to the following and hit “Send”:
localhost:3000/api/deals?technology=Solar
See what happened? You’ll notice that only three PPA deals were returned this time (versus the five we originally described). We have asked our API to process any “query parameters” included in the URL and use them to filter the database. The three records shown are filtered down to PPA deals using solar technology. Let’s try adding another query parameter; copy and paste the following URL:
localhost:3000/api/deals?technology=Solar&country=United Kingdom
Now we only receive a single data object: a PPA deal originating in the UK with solar as the technology. This type of filter logic helps applications which deal with a large number of records and/or fields. You can imagine that we might create a front-end app or BI dashboard which connects to this API and filters results interactively. A simplified sketch of this kind of query-based filtering is shown below.
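For illustration only, here is a stand-in for the kind of filtering the “getDeals” service performs: it keeps any deal whose properties exactly match every query parameter supplied (no pagination or partial matching):
// Illustrative sketch: filter an in-memory list of deals by the supplied query parameters,
// e.g. { technology: 'Solar', country: 'United Kingdom' }
const filterDeals = (deals, query) =>
  deals.filter((deal) =>
    Object.entries(query).every(([key, value]) => String(deal[key]) === String(value))
  );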
Let’s try out some of the other routes. Copy the following and hit “Send”:
localhost:3000/api/deals/3
You will notice that only the single record with an “id” of 3 is returned. The “3” here is called a “route” or “template” parameter: the variable is included as part of the URL path directly, rather than being constructed like a query parameter – i.e. “api/endpoint?key=value”. Let’s try a new method. This time, instead of sending a GET request to the URL above, send a DELETE request.
Here we are instructing our API to delete the PPA record with a unique identifier equal to 3. Hit “Send”. If you call the “/api/deals” endpoint with a GET request again, you should see that only four records are now shown. We have successfully deleted a record from our database.
How about creating a new record? Set up a POST request and aim it at the following route:
localhost:3000/api/deals
If you hit send now, you should see this error message:
When we call this particular route, the API expects a request “body”. The body contains the data we want to create the PPA deal with. Go ahead and copy the following snippet:
{
"seller":"Some seller",
"buyer":"A buyer",
"technology":"Onshore Wind",
"capacity":"500",
"country":"Belgium",
"term":"5 years",
"date":"2021-07-21"
}
Click on the “Body” tab, click on “raw”, then update the data type to “JSON”. Paste the snippet in the box below.
When we send this request, our API is going to validate the data received and create a new record if all checks pass. If everything is in order, the API will respond with the new record that was created. Hit “Send” and you should receive a response containing the newly created record, including its “id” field. We’ve just successfully inserted data into our database, nice!
Updating a record works in much the same way as creating one: you just need to send a PATCH request to the “/api/deals/{id}” endpoint. By including the “id” in the route, you specify which record you would like to update. This endpoint also expects a body object similar to the one above. Give it a try; an illustrative example is shown below.
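As an illustration (the values here are placeholders), a PATCH request to “localhost:3000/api/deals/2” could carry a body like the following. Because the updateDeal service replaces the whole record, all of the fields need to be included:
{
"seller":"Some seller",
"buyer":"A buyer",
"technology":"Onshore Wind",
"capacity":"750",
"country":"Belgium",
"term":"10 years",
"date":"2021-07-21"
}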
And that concludes the testing phase. There are plenty of other things we could do to test and improve the API, but as an introduction to the technology, we’ve covered a lot of ground.
In Closing
And there you have it! In this tutorial we defined, built, and tested an API from scratch using our PPA use-case as an example. I hope the tutorial was useful in explaining a bit more about how APIs work and in showcasing their power for both internal and external use. All of the code shown in this tutorial is available on the re.alto GitHub page. Happy coding!
Tutorial: Visualising Solar Forecast Data Using Python and Web APIs
Development
This tutorial will show you how to visualise solar forecast data using Python and web APIs.
APIs/Development

Web APIs are becoming an increasingly popular standard for communicating data. The days of CSV downloads, file transfers, and even web-scraping are numbered, and for good reason. Before APIs can be used and integrated into any system or process, however, there’s usually an element of investigation and play that needs to happen.
Understanding how an API works, what data it contains, and how that data is structured and formatted are all activities which have to take place upfront. One technique which helps with this is collecting and transforming the data into a state which is ready to be visualised. That’s what we’ll cover today 😎
For this tutorial, we’re going to create a Python script which pulls Belgian solar forecast data via an API, post-processes it into a pandas DataFrame and finally visualises the output as a time series graph.
Why solar forecasts? Well, solar forecasts are an interesting time-based dataset to work with, since they’re critical for many different actors within the energy sector: grid operators, renewable asset owners/operators/developers, traders, smart buildings, energy management systems – the list goes on.
So let’s begin!
What you’re going to need
- A Python IDE (I personally use Spyder)
- Postman – a simple-to-use app which allows us to interact with APIs easily
- An account on re.alto – an API marketplace for the energy sector
- A subscription to Elia’s solar forecasting API
Querying the Elia solar forecast API using Postman
Before we get into any coding, let’s first take a look at the API which Elia (the Belgian Transmission System Operator) has created. On the technical operations page you can see that there’s only one operation/endpoint to integrate and a variety of parameters we can play with.
Let’s boot up Postman and create a request using the following query parameters: dateFrom, dateTo and sourceId (where a sourceId of 1 corresponds to Belgium).
We also need to include an API subscription key (retrieved under the “My Subscriptions” area on the re.alto portal). This needs to be included in your request header with the following key-value pair:
"OCP-Apim-Subscription-Key" : "YOUR-API-TOKEN"
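Putting the pieces together, the Postman request ends up looking something like this (the dates are example values, and the endpoint is the same one used in the Python script later in this tutorial):
GET https://api.realto.io/elia-sf-BE/GetChartDataForZone?dateFrom=2021-03-07&dateTo=2021-03-15&sourceId=1
OCP-Apim-Subscription-Key: YOUR-API-TOKEN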
Hit “Send” and your Postman client should get a response. For example:
Okay great, so now we have a fairly good idea of what data the API contains and also what post-processing work will need to be done before we can visualise the output.
Step One - Setting Up the Python Script and Details
Over to our Python script: let’s start off by defining our imports and variables.
## script.py
# imports
import requests
import pandas as pd
import re
import matplotlib.pyplot as plt
import seaborn as sns
# inputs
token = 'YOUR_API_TOKEN_HERE'
endpoint = 'https://api.realto.io/elia-sf-BE/GetChartDataForZone'
sourceId = 1  # 1 = Belgium
dateFrom = '2021-03-07'
dateTo = '2021-03-15'
A note on some of the packages here: we’ll use “requests” to actually call the API, “re” to run the RegEx query needed for our post-processing, pandas to create the data structure, and matplotlib together with seaborn to create the time series plot at the end.
Step Two - Fetching and Preparing the Data
Next, let’s query the API, convert the response and set up our DataFrame:
# call API and set up DataFrame
response = requests.get(endpoint, headers={'OCP-Apim-Subscription-Key': token}, params={'dateFrom': dateFrom, 'dateTo': dateTo, 'sourceId': sourceId})
json = response.json()
data = json['SolarForecastingChartDataForZoneItems']
df = pd.DataFrame(data)
Here we’re calling the API and then taking the “SolarForecastingChartDataForZoneItems” array and converting it into a Pandas DataFrame. Let’s take a look at what that produces:
Step Three - Plotting a Time Series Graph and Visualising the Output
One more thing we need to do is append a proper timestamp to each row. You can see that a Unix epoch timestamp (in milliseconds) currently lives inside the StartsOn column’s DateTime string. Let’s go through each row, pull the number out of that string (using RegEx), convert it into a pandas timestamp, and finally append the new column to the DataFrame.
# create timestamps
dates = []
for index, row in df.iterrows():
    dates.append(pd.to_datetime(re.findall(r"\d+", row['StartsOn']['DateTime'])[0], unit='ms'))
df['DateTime'] = dates
Once that’s done the data is ready to be visualised!
Plotting the Graph:
There are a couple of Python packages for visualising data; the one we’re using is called seaborn. Check out some of the visuals in their examples gallery for inspiration. Since our DataFrame is set up, getting a time series graph is simply a case of defining which columns we want to plot, plus any settings we want to play with (e.g. colours, titles, sizes, etc.). Let’s keep it simple:
# visualising
new_df = df[['WeekAheadForecast', 'DayAheadForecast', 'MostRecentForecast', 'RealTime', 'DateTime']].copy()
new_df.set_index('DateTime', inplace=True)
sns.set_style("darkgrid")
fig, ax = plt.subplots(figsize=(10, 10))
sns.lineplot(data=new_df, ax=ax, palette="tab10", linewidth=2.5)
This outputs the following graph:
You can see the forecasts become more accurate the closer you are to the forecast date… Who would have guessed?
And voila, that’s it!
In Summary:
We queried the solar forecast API, post-processed the data and visualised the output on a time series graph.
We hope this tutorial proved useful to you. Feel free to reach out to us with any questions you may have.
The full Python script
You can find a copy of our script on our GitHub page.
What is an API?
Development
The term API is an acronym. It stands for “Application Programming Interface.”
Energy APIs

An API is a vital building block in any digital transformation strategy, and one of the most valuable in achieving scale, reach and innovation. Behind every mobile app and online experience, there is at least one API. At its most basic level, an API is used to integrate diverse systems by acting as the communication interface which allows two different web-based applications to exchange data over a network connection without the need to merge physical operational infrastructure.
Companies of any size can use APIs for many operational solutions, from analytics to online payments. APIs are the set of protocols which make an organisation’s data and services digitally available to external developers, partners and internal teams over the internet, creating a two-way path for data, digital products and services and a seamless cross-channel experience.
How do APIs work?
One of the most commonly used analogies for APIs is that of a restaurant waiter. It glosses over many of the complexities but it’s useful for understanding the basics.
In a restaurant, the waiter gives you a menu of dishes available to order as well as a description of each item. You choose what you’d like to eat, and the waiter sends the message to the chef who prepares the food. The waiter then brings it straight to your table. You don’t know how the chef made the food or what happened in the kitchen – the waiter is the communication link between you and the kitchen, and he delivered exactly what you ordered.
For developers, an API works in a similar way to that waiter – it is a communication channel delivering an exchange of data or services. An API lists a range of operations which developers can use, along with a description of what each can do. The developer knows he needs a particular function in the app he’s building, so he chooses one of the API components which will deliver that functionality from the API provider and integrates it into his own application.
APIs in Everyday Life
Just think of the number of applications you use on a daily basis which have an embedded Google map or involve getting directions. Chances are that those applications are using the Google Maps API or Google Maps Directions API, which allow developers to access Google’s static and dynamic maps and street views.
If you’ve ever used PayPal to pay for something online, you’ve done so courtesy of an API. When you click ‘Pay with PayPal’, the e-commerce application sends an order request to the PayPal API with the amount owed and other required details. The user is then authenticated through a pop-up and, if all is ok, the PayPal application sends payment confirmation back to the application.
Amazon released its own API to enable developers to easily access Amazon’s product information so that third party websites can post direct links to products on Amazon.com with the ‘Buy Now’ option. And streaming services like Spotify use complex APIs to distribute content across different platforms.
I'm Building an App - How Can APIs Help?
There are a number of advantages to an API-led approach as a developer, but efficiency and innovation are top of the list.
APIs reduce the quantity of code developers need to write themselves when building software, so the faster approach means greater efficiency, both in time and budget. If, for example, a digital application required a weather forecast function, a developer can simply integrate one of the many weather APIs available online rather than needing to build an entire meteorological system from scratch.
By removing any barriers through the intelligent use of APIs, new functionality, or indeed entirely new digital products and services, can be developed faster for rapid innovation. In an increasingly fast-moving market such as energy, the capability to move quickly, respond to new challenges and stay competitive is crucial.
My Company Doesn't Have an API. Should We Build One?
As we’ve seen, APIs should be a key component of any long-term digitalisation strategy.
APIs can be private or public. Private APIs are used only by a company’s internal developers and are not available to third parties. They are a useful tool for integrating and streamlining your own internal digital processes, creating greater efficiencies and potentially speeding up product time to market through faster development.
So private APIs are useful, but the benefits of making them public have even greater potential. Public, or Open, APIs allow you to extend your data, digital products and services beyond your own boundaries into new markets, acting as a revenue stream and new sales channel. They enable customers to integrate directly with your systems in a flexible way which works for them, building the foundation for a mutually beneficial commercial partnership.
Let’s go back to the example of Google Maps. Between 2016 and 2018, Uber paid Google $58 million for the use of Google Maps in its app to help drivers navigate and visualise the journey for customers. That’s a significant discount on Google’s normal rate for use of its Maps API for a good reason – market visibility. Just think of the millions of Uber customers using Google Maps daily within the Uber app. Public APIs can increase your own reach and attract new customers. Not only that, but by charging for usage of your APIs, you are better positioned to monetise your own data and services.