Domain Model and Security Principles of the re.alto API Platform
Energy APIs
This article is intended as an introductory guide for developers and architects using our API platform.
20.01.2025
Development

Introduction
The vision of re.alto is to support businesses in developing outstanding products by providing APIs and energy-driven data solutions to help build digital products faster. We do this by connecting to devices through their existing IoT connectivity. At the core of our solution is a powerful IoT management platform. It connects to any type of device, streams device data in real-time and securely stores it for future retrieval. The platform can stream thousands of data sets per second, aggregate readings, retrieve charge data records (where available) and be used to manage and steer devices. Integration is also straightforward.
The guide below explains our domain model, the terminology used and our IoT platform’s security principles.
Domain Model Components (Terminology and Set-Up)
The platform is structured in Tenants. A Tenant refers to a customer environment, and every Tenant has an administrator who controls everything within that Tenant. The Tenant admin can be either a person or a program/app, known as Principals of type User or Client respectively. A Client is usually used by a backend system/process such as an app and has a Client ID and a Client secret. A User logs in using an email and password. It is this Client ID or User ID that defines what you have access to see on the platform. The Tenant admin is also a Principal (and therefore either a Client or a User). Members are Clients or Users that are part of a Tenant but are not admins. Members also have either a Client ID or a User ID; however, members cannot add or remove themselves or other members to or from a Tenant. Only the Tenant admin has the right to do this.
In each Tenant, the Tenant admin can onboard devices, which we refer to as Entities. An Entity is added to the system via an onboarding request raised by a Principal with access; that Principal also becomes the owner unless a different owner is specified in the request. Any sort of device that we onboard becomes an Entity and receives an Entity ID. Each Entity has an owner, and the Entity owner has the right to change its properties. Members have reduced rights: they can read the data but cannot alter the properties of an Entity.
Entities can be grouped together in Collectives. A Tenant can have multiple Collectives, making it easy to separate different Entities into groups (depending on the company they belong to, for example). Entities that are grouped together in Collectives can be displayed together. Each Collective has an owner that is assigned by the Tenant admin, and multiple members can be added to each Collective, all of whom then have the right to see the data of the Entities within that Collective. A “Collective” therefore comprises both a group of Entities and the Users who are its members. A Collective of Entities has a Collective owner and Collective members. The data from all Entities in a Collective can be shared with a number of Principals (User or Client IDs). The owner of the Collective can set certain parameters on an Entity, such as its name. Members can only use the Entities (ie: read their data).
The Collective is a powerful tool to link various Entities together and then share the data with other people or programs. For example, a fleet manager could use a Collective to conveniently see the data from all of their company’s vehicles in one place. However, a Collective could also refer to a household with multiple cars, a heat pump etc, and any member of that Collective could then view the data from all Entities within that Collective.

Security
The security principles are based on the domain model explained in the first part of this article. To see the data of a device, you must be the Tenant admin or a member of the Tenant, or the Collective owner or a member of the Collective. To authenticate against our platform, a Principal ID is required: either a Client ID (for programs) or a User ID (for people). That Principal is then a member of a Tenant or a Collective, or the owner of an Entity, and this determines whether you can see a given Entity and its data and do something with it. Every individual record, Tenant, Entity and Collective is secured with these rights; if you have no rights to any Entities, Tenants or Collectives, you won’t be able to view any data.
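To make the rule concrete, here is a minimal sketch (not re.alto’s actual implementation; all type and property names are hypothetical) of how such an access check can be expressed:

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical, simplified types; the real platform model is richer.
public record Principal(Guid Id);
public record Tenant(Guid AdminId, HashSet<Guid> MemberIds);
public record Collective(Guid OwnerId, HashSet<Guid> MemberIds);
public record Entity(Guid OwnerId, Tenant Tenant, List<Collective> Collectives);

public static class AccessRules
{
    // A Principal may read an Entity's data if it owns the Entity, is the admin or a member
    // of the Entity's Tenant, or is the owner or a member of a Collective containing the
    // Entity. No relationship means no access.
    public static bool CanReadEntity(Principal principal, Entity entity) =>
        entity.OwnerId == principal.Id
        || entity.Tenant.AdminId == principal.Id
        || entity.Tenant.MemberIds.Contains(principal.Id)
        || entity.Collectives.Any(c => c.OwnerId == principal.Id
                                       || c.MemberIds.Contains(principal.Id));
}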
A re.alto customer typically has one Tenant on our platform but can organise onboarded Entities into various Collectives within that Tenant. This means that if Company A is working with various companies/fleet managers, for example, they can onboard the vehicles from those companies and organise each set into its own Collective, so that each company/fleet manager will only be able to see the data from the cars in their respective Collective and not the data from cars organised into a separate Collective by Company A. Any vehicle added to the Collective later can also easily be viewed without any additional work – that is the power of the Collective. Company A is the owner of the Collective within their Tenant, but they can make Fleet Manager A a member of a Collective and assign them rights within that Collective, so that they can see data from vehicles within the Collective. They will remain unable to view data from vehicles in the other Collectives within Company A’s Tenant. You have to be a member of a specific Collective to see the data from Entities within that Collective – and that is where the domain model meets the security model.
Selling Energy Data: Adopting an API-as-a-Product Mindset
Energy APIs
API-as-a-Product: What to consider when marketing your energy data API
16.12.2024
Energy APIs

An API, or application programming interface, is a communication channel between two web-based applications, allowing the exchange of data without any connecting physical infrastructure. APIs have enormous potential to open companies up to new revenue streams, unlock new markets and extract value from existing assets. They are a crucial channel to exploit data which has in some cases previously been sitting unused or siloed. To harness and maximise the value of this data, the API must be viewed as a product in itself. To fully realise this potential, APIs need to be lifted out of the sole domain of the IT department and treated as a digital product in their own right. Adopting such an API-as-a-product mindset transforms APIs from a project-specific internal IT tool to the very cornerstone of digital transformation, relevant suddenly to wider business opportunities. If you have energy data, for example, you could be earning extra revenue by following a modern data monetisation strategy and marketing that data as a product.
If APIs are to be considered as a digital product, they need features such as a product description, a usage licence and pricing model just like any other product brought to market as part of an overall business strategy. It certainly isn’t the case that all digital assets available through a company’s existing API are naturally of monetary value to a broad external audience. The initial commercial success of any product depends on whether there is a clearly defined market and customer need. As with any product, you need to look at the target audience and their specific needs – to whom would this data be of interest, for what pain point does it provide a solution and what value does it offer? Exclusivity in data and functionality is always a good place to start, but it is not essential if the offering still delivers value to a particular audience. As an example, an API providing a forecast of day-ahead electricity prices is obviously only relevant to industry players with short-term operations that enable or rely on them reacting quickly, not to large-scale industrial consumers on three-year fixed price contracts. The key is to know and understand your audience and their requirements.
It is also worth being aware that when it comes to defining the market for APIs, there is usually both an internal and external audience. Internally, an API can be used within an organisation to transform operational processes and integrate disparate systems. Externally, the developer community is most often the direct API user. APIs enable developers to leverage data to build new applications at pace and at scale by removing the need to write entirely new code. They are the building blocks used by the developer community to create other products, which will inevitably in turn evolve in line with end user need. Within the energy industry, APIs are key in facilitating the sharing of data which has previously been inaccessible and fragmented, opening collaboration and innovation across the entire energy value chain.
User need for APIs will inevitably vary depending on the nature of the audience, so determining what features they require should be the first step in any product roadmap. By taking this approach with APIs, it is possible to transform what has previously been a technical asset used for short-term, finite internal projects into an agile digital product with long-term commercial value. In return, API usage analytics then offer the provider a valuable source of insight into performance and can be used to update the product roadmap as necessary. API response times, number of calls and usage patterns, among other metrics, generate feedback which enables the API provider to better understand what the audience needs and adapt their product accordingly.
On a final note, the success and value of an API will ultimately depend on customer experience and ease of use. Key considerations for APIs with external audiences include security – clear policies and protocols to protect and control the data – straightforward integration processes and access to relevant documentation. These all maximise productivity for developers and are easily handled by offering the well-designed API through a digital marketplace. This is where the re.alto API Marketplace comes in – it acts as a sales channel through which you can market your energy-related API to potential customers. Consumers can browse a variety of different APIs on our marketplace and then easily subscribe to the ones that interest them. These subscriptions are monitored, tracked and (if monetised) billed and settled by the re.alto platform, simplifying that side of the sale for the provider. The owner of the API retains full control over who has access to their product with re.alto simply acting as a billboard and sales channel for their data.
API Marketplace Fix (Following Elia API Updates)

Provider updates made to some of the APIs on our portal earlier this year caused them to suddenly stop working for our Marketplace users. One of re.alto’s developers quickly explored the reasons why and found a solution, resolving the issue efficiently for those actively using the APIs. More below.
The re.alto API Marketplace enables consumers to find and integrate with third-party energy data via APIs and offers providers a platform to easily advertise and sell their data. In the rare case that access to an API no longer functions as it should, re.alto’s developers will resolve the issue as soon as possible. A recent example of this was when one of the providers on our marketplace, Elia, made changes to the structure of their APIs earlier this year, leading to some technical issues for those trying to call the APIs from the re.alto platform. The issue was flagged and re.alto quickly began investigating and working towards a solution.
The issue occurred on 22 May 2024, when Elia changed their public APIs to make them more powerful, with the downside that the APIs became fragmented. The Elia databases are very large, and they greatly improved the versatility of their APIs by updating them to enable users to be far more specific when filtering for data, something which is quite rare in APIs and offers the user far more control in specifying exactly which dataset they want to view. With such a large amount of data available (quarter-hourly data, minutely data, historical data, near real-time etc.), separating the data via different URLs based on time frame makes the quantity of data far more manageable for many of the users seeking a specific dataset.
Normally, users would receive a large amount of data and then have to filter it themselves on their end, depending on their needs. Via Elia, users can now filter the desired data before receiving it (for example group it, order it, offset it, limit it). This gives more power and control to the user. But in making the APIs more versatile, the data was split up into new categories, with the consequence that users now need to use several different URLs to separate databases to obtain the same kind of data that was previously available through a single URL. Users are now required to make calls to different Elia APIs depending on the date, time frame and type of data that is required.
For example, Elia has now split their Imbalance Prices API (one of the more popular ones on our marketplace) into six separate APIs, depending on the date/type of data required. There are imbalance prices per quarter hour and per minute as near real-time data but also as historical data, and some of these categories are then split into different tables again depending on whether the data is from before or after the changes were implemented on 22 May 2024 (for example, historical data before or after this date). So, where there used to be one API for this huge amount of data, the data is now found in various data tables, each reached via a different API/URL. The update to the APIs on Elia’s side made our implementation of the Elia APIs suddenly unusable, because the URL had changed and the data had been split into many different tables.
The question re.alto then faced was how we could continue to provide this data through our marketplace without our customers having to do multiple API calls to various separate data tables on Elia’s side. The users of re.alto’s marketplace value simplicity in obtaining data. We wanted to resolve the issue by keeping the API as simple and familiar to use as possible for our users and therefore maintain the value in our implementation. Before the changes, users could specify the day they wanted data from, and the API would provide all the data from this date. While updating the URL in our API gateway would have been a simple fix, we would have then had to add various APIs to the marketplace to cover all the same data that the single API had provided access to previously, due to it now being found in different data tables at Elia. Instead, we wanted to provide value to our users by simplifying the API calls so that the data from multiple data tables would be consolidated back into one single endpoint without having to take those different data tables into account on their end.
One of our developers updated the API definition and programmed our gateway to pull the requested data from various Elia data tables by configuring a branching redirect to the various tables dependent on the date given as a parameter. This means that data from various data tables is still available on re.alto via one single connection, keeping it simple for our users. Thanks to re.alto’s development team, our users can just continue to specify the required date/time frame and they will see all of the expected data as before. The simplicity of calling these APIs has therefore been maintained for our marketplace users, meaning the changes from a user perspective are now minimal.
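As a rough illustration of that branching idea (a sketch only: the method signature and dataset names are made up and are not Elia’s or re.alto’s real identifiers), the gateway essentially picks the upstream dataset based on the requested date and granularity:

using System;

public static class ImbalancePriceRouting
{
    // The date on which Elia restructured its public APIs.
    private static readonly DateOnly CutOver = new(2024, 5, 22);

    // Returns a placeholder identifier for the upstream Elia dataset to query;
    // in the real gateway each branch maps to a different Elia URL.
    public static string ResolveUpstreamDataset(DateOnly requestedDate, bool perMinute)
    {
        if (requestedDate < CutOver)
        {
            return perMinute
                ? "historical-imbalance-prices-per-minute (pre-22/05/2024)"
                : "historical-imbalance-prices-per-quarter-hour (pre-22/05/2024)";
        }

        return perMinute
            ? "imbalance-prices-per-minute (post-22/05/2024)"
            : "imbalance-prices-per-quarter-hour (post-22/05/2024)";
    }
}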
If you’d like to learn more about our API Marketplace or our IoT connectivity solutions, please reach out via the contact us page on our website.
(The Imbalance Prices API is now split into six via Elia.)
Image source: Elia
New Feature: Guided Onboarding of EVs
Energy APIs
This article looks at the benefits of one of our newer features: the guided onboarding of EVs to the re.alto platform.
14.10.2024
Electric Vehicles/IoT Connectivity

The guided onboarding process is a graphical interface where the user (in this case the driver) can trigger the onboarding of their vehicle onto the re.alto platform by following a link and then entering their own vehicle data, as opposed to our client having to set this up for each individual user/vehicle first. Our guided onboarding feature enables users to onboard their device themselves through a web-based UI created by re.alto. This allows re.alto to interact directly with those users, greatly minimising the integration effort for our client.
For a regular onboarding session, the client would need to know each user’s VIN (the vehicle’s unique identification number) and car brand to create a classic onboarding request. This can delay the onboarding process, as the client would need to collect the user-specific VINs and car brands before each onboarding session could begin. The benefit of a guided onboarding session, in comparison, is that it can be instigated without prior knowledge of the vehicle brand or VIN. The guided onboarding session simply creates a guided onboarding URL with a unique code. Clicking on the URL begins the onboarding journey by directing the user to re.alto’s web UI screens, which guide them through onboarding their car; there they can fill in the required information about their vehicle themselves and consent to sharing their data. The VIN/brand combination is verified by re.alto and the connection is confirmed. The integration effort and admin work are minimised for the client, as each individual user can complete the onboarding process for their own vehicle, and the session can be started without the client needing to provide each individual VIN and car manufacturer upfront. Since implementing this feature, it has become faster and easier for people using our platform to onboard cars. While a client/admin would previously have had to collect each user’s vehicle-specific information before being able to begin this process, the guided onboarding session now automates it for them.
To explain the feature more clearly, here is an example: Client 1 makes a website or application for their users (in this example, their employees) to interact with. The functionality behind this website/app calls re.alto’s APIs (which we use to collect data from a vehicle, such as the state of charge and location, for our client). With classic onboarding, however, the re.alto platform could not be called to start onboarding until the client had input both the user’s VIN and car brand. Hence, the client would first need to collect this information from each of their users through their own application before being able to trigger the onboarding sessions through re.alto. Instead of creating an onboarding request in this way, the client can now simply create a guided onboarding session which can be accessed via a unique URL containing a secure access code. The client can share this URL with their users directly or redirect them to it from their own application. The client can easily create multiple onboarding requests and send these to multiple users at the same time. Users are then redirected to re.alto’s UIs, where they are informed that the client’s company wants to connect to their vehicle. They can then input their own vehicle data, give their consent and trigger the onboarding flow themselves, saving the client time and ensuring a smoother, more professional journey for the user. The first screen focuses on consent, the second requests the vehicle information and the final one is the verification and confirmation stage.
Guided onboarding simplifies the onboarding journey and integration for our clients by enabling re.alto to interact directly with the client’s users. Whereas the previous system on the platform required the client or admin to know each individual user’s VIN and car brand upfront, a guided onboarding session makes it possible to begin the onboarding process before inputting any of this information. In addition, the user must now confirm that they give their consent to sharing their vehicle data, something that was not clearly captured in the classic onboarding process, which left consent capturing to the client. This added consent tracking and management ensures a more secure and professional process for all.
Guided onboarding is currently only available for electric vehicles, but additional types of devices will be added in future. More information on guided onboarding can be found in the re.alto readme or by contacting us.
Real-Time Syncing of API Documentation to ReadMe.io
Energy APIs
This guide shows you how to sync API documentation to readme.io in real-time.
01.08.2024
Development

For an API to be successful, it needs to meet certain quality criteria, such as:
- intuitive and developer-friendly, focused on easy adoption
- stability and backward compatibility
- security
- very well documented, with up-to-date documentation
Focusing on the last one raises a few questions:
- How do we document?
- How do we make the documentation public?
- How can we keep the API and the documentation in sync in real-time with minimal maintenance effort?
How do we document?
The sky is the limit here, but there is also a clear standard for describing APIs: the OpenAPI Specification.
For our services, the spec is automatically generated from the C# XML doc comments.
This will output a standardised JSON or YAML specification that can be interpreted by any tool higher up the stack that supports the same standard. The ubiquitous swagger UI is the most well-known example.
That is great for local development and internal access, but once the API goes live we need something publicly accessible by any consumer or by our partners.
How do we make it public?
This is usually done by uploading the specs to some kind of developer portal that supports OpenAPI specs and offers supporting features like partner login, public access, high availability, a user-friendly UI, a playground to try the API etc.
While there are a few examples and solutions, we will focus on ReadMe.io which basically takes as input an OpenAPI spec and layers a nice UI on top of it.
Keeping it all in sync:
The goal is that once a developer changes the API, the documentation updates automatically as soon as the code is deployed to production and ready to be consumed, without additional work: no copying and pasting of text, no manual editing, no updating of wiki pages.
The immediate go-to solution is to automate this process in the CI/CD pipeline.
What we need:
- tooling to push the file to readme.io – this is provided by readme
- an API spec file (json/yaml) – as input to the readme CLI
- automation in the CD Azure pipeline – to call the CLI whenever we have a code update
Here are the Azure pipeline tasks that do the trick:
- task: CmdLine@2
  displayName: 'Install readme.io CLI'
  inputs:
    script: 'npm install rdme@latest -g'

- task: CmdLine@2
  displayName: 'Update OpenApi spec in readme.io'
  inputs:
    script: 'rdme openapi https://$(PUBLICLY_ACCESSIBLE_HOST)/swagger/v1/swagger.json --key=$(RDME_KEY) --id=$(RDME_SPEC_ID)'
There are a few important points here that can be solved in other ways depending on your setup.
The CLI is run from the context of an Azure pipeline, so the pipeline needs a way to reach the swagger.json file. Either you somehow:
- put it in a DevOps artifact and use it as a local file, or
- just take it from a deployment location that is publicly accessible from the CI/CD pipeline
The last two arguments are the access key to readme.io and the API specification ID in readme.io; they can easily be obtained from the readme.io admin panel. If you have multiple APIs, you will have multiple different IDs. They are stored as locked variables in the Azure DevOps library.
Private infrastructure:
You might run into the case where your API deploys inside some infrastructure where Azure DevOps pipelines do not have access. This was our case:
We built our custom API gateway using YARP (which isn’t very compatible with swagger).
You can work around such a problem by exposing the swagger endpoint in the API gateway.
If you have multiple services, you expose them on different endpoints in the API gateway. YARP requires unique URLs for different endpoints.
For example:
- https://$(API_GATEWAY_CUSTOM_DOMAIN_HOSTNAME)/swagger/myAPI1/v1/swagger.json
- https://$(API_GATEWAY_CUSTOM_DOMAIN_HOSTNAME)/swagger/myAPI2/v1/swagger.json
Then implement a URL transformer in YARP to rewrite the URL to /swagger/v1/swagger.json just before forwarding the requests to the private services.
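A sketch of what such a transform can look like in the gateway’s Program.cs (route and cluster configuration omitted; this is not re.alto’s exact code):

using Yarp.ReverseProxy.Transforms;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddReverseProxy()
    .LoadFromConfig(builder.Configuration.GetSection("ReverseProxy"))
    .AddTransforms(transformBuilderContext =>
    {
        // Rewrite /swagger/{serviceName}/v1/swagger.json to /swagger/v1/swagger.json
        // just before the request is forwarded to the private service.
        transformBuilderContext.AddRequestTransform(ctx =>
        {
            var path = ctx.Path.Value ?? string.Empty;
            if (path.StartsWith("/swagger/", StringComparison.OrdinalIgnoreCase)
                && path.EndsWith("/swagger.json", StringComparison.OrdinalIgnoreCase))
            {
                ctx.Path = "/swagger/v1/swagger.json";
            }

            return ValueTask.CompletedTask;
        });
    });

var app = builder.Build();
app.MapReverseProxy();
app.Run();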
This is enough for most scenarios, but there is yet another special case that can occur on top of everything else.
Special scenario:
In case you have multiple APIs, you might keep the input/output models of that API in a separate nuget package that is referenced by all APIs.
In this scenario, the generated OpenAPI spec will be missing all the XML documentation that lives in the external nuget.
All of those descriptions will be missing by default!
The problem is a bug in the .NET project system that prevents the XML doc file located in the nuget package from being copied to the output folder of the API.

How to fix it:
- A bit of manual intervention is required inside the .csproj file:
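The original snippet isn’t reproduced here, but a target along these lines does the job (a sketch; MyCompany.Api.Models is a placeholder for the name of your models nuget):

<Target Name="CopyModelsXmlDoc" AfterTargets="ResolveAssemblyReferences">
  <ItemGroup>
    <!-- For every referenced dll, take the .xml doc file next to it, but keep only the one
         that belongs to our shared models package and actually exists on disk. -->
    <ModelsXmlDoc Include="@(ReferencePath->'%(RootDir)%(Directory)%(Filename).xml')"
                  Condition="'%(ReferencePath.Filename)' == 'MyCompany.Api.Models'
                             And Exists('%(RootDir)%(Directory)%(Filename).xml')" />
  </ItemGroup>
  <Copy SourceFiles="@(ModelsXmlDoc)" DestinationFolder="$(OutputPath)" SkipUnchangedFiles="true" />
</Target>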
This will iterate over all referenced dll files and copy the respective XML file to the output directory, if the file exists and if it matches the name of our models’ nuget. Without that name filter, a lot more XML would be copied over, needlessly increasing the deployment size.
This will work beautifully locally, but it will not work in a docker environment where the image is built by Azure pipelines.
- To make it work from the pipelines, where you will likely use the dotnet publish command, an extra line needs to be added after the initial copy operation.
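Again as a sketch, the extra line is another Copy inside the same target, this time targeting the publish directory:

<Copy SourceFiles="@(ModelsXmlDoc)" DestinationFolder="$(PublishDir)" SkipUnchangedFiles="true"
      Condition="'$(PublishDir)' != ''" />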
The only difference is the destination folder: instead of the output path, it is the publish directory.
- The last touch is to add the NUGET_XMLDOC_MODE environment variable to instruct the restore operation to unpack the XML docs from the nuget packages together with the DLLs. In the official .NET SDK Docker images it is set to skip by default, to shave a few seconds off the pipeline run time, so we override it with none.
ENV NUGET_XMLDOC_MODE=none
RUN dotnet restore "myApiProject.csproj"
RUN dotnet build "myApiProject.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "myApiProject.csproj" -c Release -o /app/publish
Wrap up:
To ensure the success of your API, don’t neglect the documentation side of it. Never rely on manual updates of wiki pages, Confluence pages or even the UI editors that some tools and services put at your disposal.
Always strive to automate and correlate the API code deployment with the deployment of updated docs.
Depending on the particular case, the system and technology stack, and the tactics used, the hoops you might need to jump through to achieve this will vary – but the strategic goal should be the same.
Generating Stellar OpenAPI Specs
Energy APIs
This guide by our development team shows you how to generate stellar OpenAPI specs.
23.07.2024
Development

Nowadays, OpenAPI Specifications are the de facto standard for describing and documenting APIs, enabling easy interoperability between API providers and API clients and consumers.
Producing a high-quality OpenAPI spec will have a significant impact on the success of your API, as it is the basis for a lot of tools that sit higher up the stack, like:
- swagger UI
- developer portals
- autogenerated API clients (language agnostic)
In the .NET world, the most common way to generate an API spec is by using swagger. When properly configured, swagger supports versioning and automatic documentation based on the autogenerated XML docs.
Hence the better and more complete the XML docs, the better the resulting documentation and specification of the API.
In the next sections, we will put it all together, going from zero to a hero API spec that is:
- versioned
- serves multiple audiences
- well documented
- automation ready
Enable Versioning
The first step is fairly simple. We enable versioning and configure swagger:
public static IServiceCollection AddVersioning(this IServiceCollection services)
{
    services.AddApiVersioning(options =>
        {
            options.DefaultApiVersion = new ApiVersion(0, 0);
            options.AssumeDefaultVersionWhenUnspecified = true;
            options.ReportApiVersions = true;
        })
        .AddApiExplorer(options =>
        {
            options.GroupNameFormat = "'v'VVV";
            options.SubstituteApiVersionInUrl = true;
        });

    return services;
}
The configuration part is as generic as possible to be reusable in multiple different APIs:
public static IServiceCollection AddSwagger(this IServiceCollection services,
    string apiTitle, string apiDescription)
{
    services.AddSingleton(new SwaggerApiMetadata(apiTitle, apiDescription));
    services.ConfigureOptions<ConfigureSwaggerOptions>();

    return services.AddSwaggerGen(c =>
    {
        /* ... */

        var xmlDocumentationFilePaths = Directory.GetFiles(AppContext.BaseDirectory,
            "*.xml", SearchOption.TopDirectoryOnly).ToList();

        foreach (var fileName in xmlDocumentationFilePaths)
        {
            var xmlFilePath = Path.Combine(AppContext.BaseDirectory, fileName);
            if (File.Exists(xmlFilePath))
            {
                c.IncludeXmlComments(xmlFilePath, includeControllerXmlComments: true);
            }
        }
    });
}
The above are used like this from an API project:
builder.Services.AddVersioning();
builder.Services.AddSwagger("Connect Readings API",
"Provides endpoints to get entity readings.");
Now let’s unpack what is going on.
- The SwaggerApiMetadata record (registered as a singleton) is needed to encapsulate and then inject the API properties specified at compile time (the title and description) into the SwaggerGenOptions configurator at runtime.
- Just before AddSwaggerGen executes, the ConfigureSwaggerOptions configurator will run and set the metadata for each version defined in the APIs. The configurator looks something like this:
class ConfigureSwaggerOptions(IApiVersionDescriptionProvider provider,
    SwaggerApiMetadata apiMetadata) : IConfigureOptions<SwaggerGenOptions>
{
    public void Configure(SwaggerGenOptions options)
    {
        foreach (var versionDescription in provider.ApiVersionDescriptions)
        {
            options.SwaggerDoc(versionDescription.GroupName,
                new OpenApiInfo
                {
                    Title = apiMetadata.Title,
                    Version = versionDescription.ApiVersion.ToString(),
                    Description = apiMetadata.Description
                });
        }
    }

    //shortened for brevity
}
The loop over the discovered XML documentation files picks up all XML doc files that are found and loads them into swagger.
The /* ... */ comment is a placeholder for the features detailed in the next section.
Customising Swagger Generation
There are times when you want to make programmatic changes to the way the final OpenAPI spec json file is generated. For example, exposing in the OpenAPI spec an HTTP header that is used by all endpoints, which in turn will show up on the UI to be easily filled out by the user.
For this, we can use swagger filters. Think of filters like .NET middleware but for swagger.
services.AddSwaggerGen(c =>
{
    c.DocumentFilter<AddServerFilter>();
    c.DocumentFilter<AnotherDocumentFilter>();      // illustrative name
    c.OperationFilter<AddHeaderOperationFilter>();  // illustrative name
    // ...
});
A document filter applies to the whole document, while operation filters are applied to each endpoint path.
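As an illustration (the filter and header names are made up, not the ones used in our services), an operation filter that exposes a common HTTP header on every endpoint can look like this:

using System.Collections.Generic;
using Microsoft.OpenApi.Models;
using Swashbuckle.AspNetCore.SwaggerGen;

public class AddHeaderOperationFilter : IOperationFilter
{
    public void Apply(OpenApiOperation operation, OperationFilterContext context)
    {
        // Add a header parameter to every operation so it shows up in the generated
        // spec and gets an input box in swagger UI.
        operation.Parameters ??= new List<OpenApiParameter>();
        operation.Parameters.Add(new OpenApiParameter
        {
            Name = "X-Correlation-Id",   // hypothetical header name
            In = ParameterLocation.Header,
            Required = false,
            Schema = new OpenApiSchema { Type = "string" }
        });
    }
}

It is then registered with c.OperationFilter<AddHeaderOperationFilter>() as in the snippet above.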
Everything so far can be nicely packaged in a shared nuget library to be reused across multiple API services.
Multiple API Audiences
It is often the case that the same API service contains both public endpoints, which are exposed to partners via a gateway, and private, internal ones that are only used by your team to carry out admin tasks and the like.
The internal endpoints will be part of the same logical API version (e.g. v1, v2, v3), but should be hidden from the public spec that partners see, while still being visible in the internal swagger UI that your team (the developers) can access.
Considering the infrastructure pieces from the previous sections are in place, there are only two things left to do:
- define internal versions of the API
[ApiController]
[ApiVersion("1.0")]
[ApiVersion("1.0-internal")]
[Produces("application/json")]
[Route("v{version:apiVersion}/entities")]
public sealed class MyController : ApiController
{
...
}
- map endpoints to either the public version, the internal version or both
[HttpGet("last")]
[MapToApiVersion("1.0")]
[MapToApiVersion("1.0-internal")]
[ProducesResponseType(typeof(MyResponse), 200)]
public async Task<ActionResult<MyResponse>> GetResponseAsync(...)
{
    ...
}
Strictly speaking there are two asp.net versions defined but only one logical version (1.0). If you leave out one version, that endpoint will simply not appear in the generated spec for that asp.net version.
This will generate a swagger UI that allows us to pick which definition to use.
Fine Tuning
Usually you will need to fine-tune the specs based on the audience. For example, a developer portal will probably have a different host server than the localhost version shown in swagger UI. The ideal place to make these conditional tweaks is in swagger filters.
For instance, the AddServerFilter from the previous section can look something like this:
public class AddServerFilter : IDocumentFilter
{
    public void Apply(OpenApiDocument swaggerDoc, DocumentFilterContext context)
    {
        var url = "https://platform.domain.com/api";
        if (!context.DocumentName.Contains("-internal"))
        {
            swaggerDoc.Servers.Add(new OpenApiServer { Url = url });
        }
    }
}
This will add the following attribute to the resulting OpenAPI spec, which will later be picked up by developer portal tooling and UIs.
"servers": [
{
"url": "https://platform.domain.com/api"
}
],
Mapping XML Docs to Swagger UI
The quality of the resulting OpenAPI spec is determined by the quality of the XML docs, the attributes, and the signatures and data types of the API endpoints. The more complete they are, the easier it is to use swagger UI or any other similar tool based on OpenAPI specifications.
Take the following code as a source:
/// <summary>
/// Returns the last reading
/// </summary>
/// <param name="entityId">The id of the entity.</param>
/// <param name="type">The type of the reading data.</param>
/// <returns>A <see cref="Task"/> that represents the asynchronous operation.</returns>
/// <remarks>Returns the most recent normalized reading for an entity.</remarks>
[HttpGet("last")]
[MapToApiVersion("1.0")]
[MapToApiVersion("1.0-internal")]
[ProducesResponseType(typeof(Connect.Api.Models.Reading.V1.Reading), 200)]
public async Task<ActionResult<Connect.Api.Models.Reading.V1.Reading>> GetLastReadingAsync(
    [BindRequired] [FromRoute] Guid entityId,
    [FromQuery] string? type = "electricity")
This will generate the corresponding swagger UI. Notice the entityId example and the default value for type: they are both pre-populated and ready to use. Notice also the remarks section, which is entirely optional and not added by default; once present, it gives the API endpoint a more detailed description.
Other tools provide similar capabilities and will offer the clients of the API a greatly improved experience.
Using the same XML docs in the models that are used as part of the endpoint interface, we obtain a nicely documented schema, with descriptions throughout the response schema.
Next Steps
At this point, we have a well documented spec with multiple versions for both internal and public clients that is fine-tuned to work locally but also for publishing to a publicly accessible location from where partners can try out our API.
The next step will be to put in place some automation that will do this publishing automatically and keep it in sync with the code throughout the lifetime of the API.
Coming up: our next technical article will look at real-time syncing of API documentation to Readme.io.
Scaling with Azure Container Apps & Apache Kafka
Energy APIs
This article explains how to scale using Container Apps and Apache Kafka.
11.06.2024
Development

This article, written by re.alto’s development team, explains how to scale with Azure Container Apps and Apache Kafka. While such documentation already exists for use with Microsoft products, our development team did not find any similar documentation on how to scale containers using Apache Kafka and Azure Container Apps. We have since figured out how to do this ourselves and want to share our knowledge with other developers. The article below is intended to act as a guide for other developers looking to do something similar.
(For an introduction to this topic, please see our previous article on containerisation and container apps here.)
To allow containers, specifically replicas of a container, to be scaled up and down depending on the number of messages flowing through an Apache Kafka topic, it is possible to set up scaling rules for the container. But first, we need to create a container image that can consume messages from an Apache Kafka topic.
Consume messages from an Apache Kafka topic:
To consume messages from an Apache Kafka topic, various solutions are already available. Since we have chosen to host our Apache Kafka cluster with Confluent, we have also decided to use their nuget package. You can find it on the nuget.org feed under the name Confluent.Kafka.
When you install this nuget package, you can use the ConsumerBuilder class to create a new consumer and subscribe to a topic.
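A minimal sketch of that setup (broker address, credentials and topic name are placeholders):

using Confluent.Kafka;

var config = new ConsumerConfig
{
    BootstrapServers = "<bootstrap-servers>",   // e.g. your Confluent Cloud broker address
    GroupId = "my-consumer-group",
    SecurityProtocol = SecurityProtocol.SaslSsl,
    SaslMechanism = SaslMechanism.Plain,
    SaslUsername = "<api-key>",
    SaslPassword = "<api-secret>",
    AutoOffsetReset = AutoOffsetReset.Earliest
};

// Build a consumer for string payloads (the key is ignored) and subscribe to the topic.
using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
consumer.Subscribe("my-topic");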
Now all that is left is to consume the messages.
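Again as a sketch, a simple poll loop does the job (cancellationToken would typically be the host’s shutdown token):

try
{
    // Poll for messages until the application is asked to shut down.
    while (!cancellationToken.IsCancellationRequested)
    {
        var result = consumer.Consume(cancellationToken);
        Console.WriteLine($"Received: {result.Message.Value}");   // handle the message here
    }
}
catch (OperationCanceledException)
{
    // Expected when the cancellation token is triggered on shutdown.
}
finally
{
    consumer.Close();   // leave the consumer group cleanly
}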
Now you have the code to consume messages from an Apache Kafka topic. If you would like more information, you can have a look at the documentation provided by Confluent at https://docs.confluent.io/kafka-clients/dotnet/current/overview.html. Now we build a container image with this code inside, register it at our container registry and create a container app based on the image – but how do we tell Azure Container Apps how to scale this container?
Scaling Azure Container Apps:
Azure Container Apps uses a KEDA scaler to handle the scaling of any container. You will not have to configure the KEDA scaler yourself, but you will have to tell the container which scaling rules to use. You will find a lot of examples in the Microsoft documentation on how to scale based on HTTP requests or messages in an Azure Service Bus. However, if you would like to know how to configure the scaling rules for an Apache Kafka topic, you may find yourself out of luck with the available documentation. We explored and managed to do it as follows.
You will need to set up a custom scaling rule for your container. We are using bicep to deploy our containers, so the examples shown below will be in bicep, but they translate easily to any other way of deploying to Azure. To understand bicep, or to see how to create a bicep file for an Azure Container App, have a look here: https://learn.microsoft.com/en-us/azure/templates/microsoft.app/containerapps?pivots=deployment-language-bicep.
We are going to focus on the scaling part of the container, so to begin with, you need to configure the minimum and maximum number of replicas that the container can scale between.
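In bicep, that part of the container app’s template can look roughly like this (a sketch, not our full template):

scale: {
  minReplicas: 0
  maxReplicas: 6
  rules: [
    // the Apache Kafka scaling rule is added here, see below
  ]
}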
In the example above, we have set the minimum number of replicas running to 0, which means it will scale down to zero replicas running when there are no messages to consume from the Apache Kafka topic. This means that we can save on costs, as well as freeing up those resources for something else.
The maximum number of replicas is set to 6. You can go as high as you like, but going above the number of partitions in the Apache Kafka topic would be pointless, since any replica above the number of partitions will not be able to consume messages. So this should be set to a number less than or equal to the number of partitions of the Apache Kafka topic.
Now let’s add the rule.
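A sketch of such a custom rule (topic, consumer group, secret names and the lag threshold are placeholders; the bicep parameter kafkaBootstrapServers is assumed to hold the broker address):

{
  name: 'kafka-topic-scaling-rule'
  custom: {
    type: 'kafka'
    metadata: {
      bootstrapServers: kafkaBootstrapServers
      consumerGroup: 'my-consumer-group'
      topic: 'my-topic'
      lagThreshold: '10' // scale out when the consumer lag per replica exceeds this value
    }
    auth: [
      {
        secretRef: 'kafka-sasl-username'
        triggerParameter: 'username'
      }
      {
        secretRef: 'kafka-sasl-password'
        triggerParameter: 'password'
      }
      {
        secretRef: 'kafka-sasl-mechanism' // e.g. 'plaintext'
        triggerParameter: 'sasl'
      }
      {
        secretRef: 'kafka-tls' // e.g. 'enable'
        triggerParameter: 'tls'
      }
    ]
  }
}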
The name of the rule can be anything you like; the name chosen here is just an example. After configuring the name, we need to define the rule. In our case, it is a custom rule. The first thing that the custom rule needs to know is the type of the rule. This translates to the KEDA scaler that is going to be used for the rule. We want to use the Apache Kafka KEDA scaler, but any available KEDA scaler can be used here.
Next comes the metadata, which means we need to provide the bootstrap servers, the consumer group and the topic that was used in the code sample to inform the KEDA scaler which Apache Kafka topic to listen to and determine whether scaling up or down is needed.
And as a final step, we need to allow the KEDA scaler to access the Apache Kafka topic. We have created a few secrets to store the required information (please look at the Microsoft documentation regarding deploying Azure Container Apps mentioned above for more information). We will have to provide the bootstrap servers, the username and password and the type of authentication to the KEDA scaler to allow it to connect to the topic. Once this is all configured and the container is deployed using the bicep module, you have your Azure Container App configured to scale based on the number of messages in the Apache Kafka topic.
Containerisation: Introduction & Use Case
Energy APIs
An introduction to the technologies used in containerisation (by re.alto’s dev team)
11.04.2024
Development

Containerisation: An Introduction to Technologies Used in Containerisation
& a re.alto Use Case Example
In this article, we want to focus on the more technical side of re.alto and are going to highlight some of the technologies currently used by our developers when it comes to container apps.
What is a container/containerisation?
Containerisation is the bundling of software code with all its necessary components into one single, easily portable package (a “container”). It allows software developers to deploy and scale applications more efficiently and removes the need for running a full operating system for individual applications.
Docker, an open-source platform for developing software in containers (applications and any supporting components), enables developers to separate their applications from their infrastructure. Developers write code to create a container image, which is then deployed to a container image repository. A container is based on a container image: when the container starts up, it pulls the image and executes the code inside it. Depending on the number of requests (ie: calls to an API), a container can have anywhere from zero to many replicas (instances of a container) running simultaneously, in which case an orchestration tool such as Kubernetes is often required to handle this.
Why Kubernetes?
Kubernetes is an open-source platform that manages containerised applications and services as clusters. Kubernetes offers a framework for software developers to run distributed systems efficiently, handling the scaling of applications and providing other useful features, such as load balancing and self-repair of containers. However, it can be complicated to manage your own Kubernetes cluster without prior experience of doing so. Most developers lack interest and knowledge in this area, as it is expertise that falls more within an infrastructural role in a company – and many start-ups have not yet scaled to the size where this hire is necessary. That is where Microsoft Azure Container Apps comes in.
Game changer: Microsoft Azure Container Apps
Like many smaller companies (and especially start-ups), re.alto does not have the time or resources to manage our own Kubernetes cluster, so we decided to use Azure Container Apps, an orchestration service introduced by Microsoft in 2022 for deploying containerised applications. This service really is a game-changer for start-ups, as it greatly simplifies the management of a Kubernetes cluster. Developers can still create containers but no longer face the hassle of configuring and maintaining their own cluster, as Microsoft does all this for them. With Microsoft’s managed environment taking over the orchestration of their cluster, developers can focus more on the actual container apps they want to build.
Our use case: combining technologies
How we use containerisation (Azure Container Apps) and Apache Kafka to get data from a smart meter dongle and stream it elsewhere.
One use case, for example, focuses on the flow of our Xenn P1 dongles. In this case, the IoT device – our Xenn dongle (attached to a smart meter for use with re.alto’s Xenn app) – pushes a message to an MQTT broker. A container consumes this message, and the raw message is then distributed by Apache Kafka.
Kafka has the ability to listen to millions of messages per second and has the benefit of storing messages for a certain period of time (in our case, seven days). That means if the connection to one container is temporarily lost, or if we stop and restart it, it will still read and process any messages from that period once it is back online – meaning no messages/commands are missed or lost. Kafka keeps track of which consumer groups (or container apps) have read which messages and, although we don’t do this currently, it is also possible to set up a schema for an Apache Kafka topic which only allows you to push messages in a specific format into a topic, guaranteeing that each message is of a specific format/standard. Using Kafka is greatly beneficial to companies like re.alto as we have far too many data streams to manage all of them ourselves.
The message is then picked up by another container app and is stored in our data storage. While all containers receive the same message, they can be programmed to follow different instructions. This enables us to have containers with individual responsibilities, such as a container for peak detection. In this case, the container extracts the relevant parts of the message and compares the readings with all other readings in the same quarter of an hour to determine whether the consumer will run into a peak. If so, the consumer receives a push notification in the Xenn app.
Azure Container Apps provides start-ups with an invaluable service in the management of Kubernetes clusters, while Kafka simplifies handling large amounts of data streams and ensures no messages from IoT devices get lost or go unread.
In our next article, we’ll be looking in more technical detail at how to scale with Kafka and Azure Container Apps and will provide you with some of the code needed to do so.
Remote EV Charging via Official APIs
Energy APIs
Remote Charging via Official APIs: the Mercedes Benz / Tesla Connector
14.02.2024
Electric Vehicles/IoT Connectivity

re.alto has been testing the official APIs from Mercedes-Benz and Tesla and our development team is satisfied with the response from both so far. The new APIs enable near real-time monitoring with a reading every five minutes and offer access to interesting data. This connectivity provides a lot of potential and opportunities when it comes to smart charging and smarter energy management – no smart charge pole is required and using an official or native API means the data obtained is reliable.
Back in November, we published an article on the new EU Data Act, highlighting that the new regulations ultimately mean that OEMs/manufacturers in the European Union must make the data of their appliances available to the user for free in a machine-readable format (ie: an application programming interface to extract or share data). Manufacturers therefore need to build interfaces to give consumers (or companies) the opportunity to download or read this data. Some OEMs are ahead of the game, with car manufacturers Mercedes-Benz and Tesla offering access and already making a remote control function available over their official APIs.
A remote connection to the electric vehicle is also important for charging-related use cases. Most people want the comfort of charging their EV at home, yet an EV adds a significant peak load to the household installation. Load balancing, optimised solar consumption, dynamic rate charging: most of these features require the installation of a smart charge pole, which easily costs €1,000 more than a regular one. That is where remote charging can be a game changer. It can help these consumers save money while increasing comfort, in use cases such as smart charging or obtaining data from the car so their employer can reimburse their transport expenses.
The APIs will also allow us to control EV charging to a certain degree, with the aim of being able to stop and start charging the vehicle on command. This opens up the potential to optimally schedule the charging of the EV, so that the consumer is only consuming energy at the time when it is most cost-efficient to do so. Our developers tested this function and saw that, in most cases, the vehicle responds to the command to stop or start charging in less than a minute. This is certainly impressive and will enable interesting new use cases as a result, especially for those with a dynamic energy tariff and those harnessing solar power – however, we will delve into these use cases in more detail soon.
We expect it to be a major game changer that car manufacturers are now enabling this remote control function via an official API, and with the EU Data Act demanding that users be given access to data, it won’t be long until more OEMs follow suit and make data available via official channels.
The EU Data Act
Energy APIs
This article looks at the EU Data Act and what it means for OEMs.
30.11.2023
News/Energy APIs

What is the EU Data Act?
The EU Data Act (regulation on harmonised rules on fair access to and use of data), proposed by the European Commission in February 2022, will play a significant role in Europe’s digital transformation going forward. The Data Act has now been adopted and is expected to be published in the next few days. As an EU Regulation, the provisions of the Data Act are binding and directly applicable in all Member States and will apply 20 months after the date of entry into force.
The Data Act will provide a framework for data access and data sharing and aims to make more data available for companies and consumers, and to ensure fairness regarding the distribution and use of this data. According to the European Commission, the main objective of the Data Act is “to make Europe a leader in the data economy by harnessing the potential of the ever-increasing amount of industrial data, in order to benefit the European economy and society”. The Commission states that “the strategy for data focuses on putting people first in developing technology and defending and promoting European values and rights in the digital world” and emphasises that the Data Act is “a key pillar of the European strategy for data”.
An essential part of this act for the average citizen is regarding the data generated by Internet of Things (IoT) devices, such as electric vehicles or smart home devices. IoT appliances are smart devices that can connect to the internet and independently communicate in real time with other devices or apps within the IoT network. When someone purchases an item from a store, they become the legal owner of that physical item. The situation with digital data from connected devices and who owns or uses it, however, has always been more complicated, and the new act aims to create clarity here.
- The new Data Act (Art. 3(1)) mandates that all connected products should be designed and manufactured in such a manner that the product data, including the relevant metadata, is, where relevant and technically feasible, by default directly accessible to the user easily, securely and free of charge in a comprehensive, structured, commonly used and machine-readable format – (i.e. not only accessible to the owner, but also to the one leasing the product, for instance). This particular obligation will apply from 32 months after the date of entry into force.
- It also stipulates (in Art. 4) that where data cannot be accessed directly by the user of the connected product or related service, data holders should make accessible the data to the user, free of charge and in real-time. This means that, after the date of application (~last quarter of 2025), the manufacturers of these devices must provide users with free access to the data produced by those devices.
- In addition, upon request by the user (or by a party acting on behalf of the user), that data should also be made available to third parties and such a request should be free of charge to the user (see Art. 5). The act also specifies the obligations of third parties receiving the data at the request of the user, e.g. they can only use it for the purposes and conditions agreed with the user and subject to relevant EU law on data protection (see Art. 6). However, making data available to third parties (data recipient under Art. 5) must not necessarily be for free, and the act provides conditions as well as rules regarding compensation (Art. 8 and 9).
Finally, while specific obligations for making available data in Union legal acts that entered into force on or before the date of entry into force of the Data Act will remain unaffected, these harmonised rules should impact the update of existing or new Union sector legislation.
What other enabling EU framework is out there?
In the meantime, recent EU legislation is already paving the way towards accessing and sharing of data from connected devices.
For instance, the recently revised Renewable Energy Directive (RED, Directive (EU) 2023/2413) not only mandates a Union target for 2030 of at least a 42.5% share of renewables in gross final energy consumption. It also asks Member States to (Art. 20a(3)):
- ensure that manufacturers of domestic and industrial batteries enable real-time access to basic battery management system information to battery owners and users and third parties acting on their behalf.
- adopt measures to require that vehicle manufacturers make available in real-time in-vehicle data to EV owners and users as well as third parties acting on their behalf.
We can find similar provisions, for instance, in the proposed revision of the Energy Performance of Buildings Directive (EPBD), which asks Member States (Art. 14) to ensure that building owners, tenants and managers can have direct access to their building systems’ data (incl. data from building automation and control systems, meters and charging points for e-mobility).
The recently proposed reform of the Electricity Market Design also asks Member States (Art. 7b) to allow transmission system operators and distribution system operators to use data from dedicated metering devices (submeters or embedded meters) for the observability and settlement of demand response and flexibility services.
Last but not least, as part of the Action Plan on the Digitalisation of the Energy System, there is a focus on the need to enable an EU framework for data access and sharing, namely via so-called EU energy data space(s). In that regard, the Commission announced the creation of an expert group (“Data for Energy” working group) that will support them in the definition of high-level use cases for data sharing (in particular for flexibility services for energy markets and grids, and smart and bi-directional charging for EVs) and in defining the governance of EU energy data space(s).
What does this mean for OEMs/manufacturers?
The new Data Act ultimately means that the European Union is going to require OEMs/manufacturers to make the data of their appliances available to the user for free in a machine-readable format (ie: an application programming interface or API to extract or share data). To enable access to the data, manufacturers will therefore need to build interfaces to give consumers (or companies) the opportunity to download or read this data. Some OEMs, such as SMA Solar, BMW and Mercedes-Benz, are ahead of the game and have already been working on building this infrastructure over the past year or two. Others, however, have not yet dedicated resources to implementing this and will need to follow suit in the year to come. With the last quarter of 2025 deadline set, the remaining OEMs will find themselves under pressure to switch their focus to ensure they are compliant with the new legislation on time.
How can re.alto help those requiring access to this data?
re.alto works with IoT connectivity and acts as a connector between OEMs and third parties. While there was previously a question of whether OEMS would choose to offer access to this data, it is now being dictated by legislation, and their compliance is therefore mandatory. That ultimately means that the IoT technology is emerging, and each OEM will have to make their data machine-readable and create a suitable interface to share this data by 2025 at the latest. But while compliance in ensuring data is machine-readable is compulsory, the EU has not imposed a standard by which all OEMs must comply when implementing this. That means that each OEM will create their own kind of interface, with the API for each device or brand potentially differing greatly from the next. The result will be a jungle of different interfaces/APIs to integrate with, making it incredibly complicated for third parties to access the various data they require when building their own energy-as-a-service products.
That is where re.alto comes in. This recent evolution in EU legislation supports our vision and aligns with the services and solutions we are offering our customers. If you are building energy-as-a-service products or applications and want to be able to access energy data from various OEMs or devices, we can give you access via a single, standardised API. re.alto can create a path through this jungle of APIs, so you can use one single interface to communicate with them all. Whether you want to add electric vehicles or heat pumps to your solution, we can act as a standard interface for all of the energy-related transactions and connections, thus simplifying access to energy data for third party use.
Conclusion
The new EU Data Act, as well as other recent pieces of EU legislation, is shaking up IoT connectivity and putting pressure on manufacturers/OEMs to make their data machine-readable and available to the public sector and ultimately the end consumer. Going forward, the strategies of OEMs will no longer play a role in whether they choose to make this data available – legislation now dictates that they must comply. While compliance is mandatory, the EU has not set any standard for the resulting infrastructure. This means that data will be available via many very different kinds of APIs and interfaces, resulting in connectivity being complicated. To simplify all of this for third party use, re.alto translates everything into one standard API connection, regardless of the kind of device or its brand.
If you are building energy-as-a-service apps or solutions and want to know more about how we can help you access the data you require in the simplest way possible, don’t hesitate to reach out to us!