Question

TrendMiner tag “update” overwrites history instead of appending – scalability limitation

  • February 12, 2026
  • 3 replies
  • 60 views

We are currently using the trendminer‑interface SDK to create and populate custom tags (e.g. predictions / calculated KPIs). During implementation and testing, we have identified a major limitation that becomes critical at scale.

When using:

client.io.tag.save(tag_data_dict, index=False)

to write data to an existing tag:

  • Saving a new dataset overwrites existing historical data for that tag.
  • It is not possible to append new time‑series data to an existing tag.
  • Writing data for a new time range clears previously uploaded history.

Although the documentation refers to this operation as an “update”, the actual behavior is a full overwrite of the time series, not an incremental append.
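To make the distinction concrete, here is a minimal sketch modelling the two semantics. The tag store is a plain dict standing in for TrendMiner's storage; save_overwrite mirrors the observed behavior of client.io.tag.save, while save_append is the hypothetical behavior we would need (neither function is part of the real SDK):

```python
# Minimal model of the two write semantics. "store" stands in for
# TrendMiner's tag storage; each series maps timestamps to values.

def save_overwrite(store, tag, points):
    # Observed behavior: the new dataset replaces ALL history for the tag.
    store[tag] = dict(points)

def save_append(store, tag, points):
    # Desired behavior: new points are merged into the existing series.
    store.setdefault(tag, {}).update(points)

store = {}
save_overwrite(store, "kpi", {"2026-02-01T00:00": 1.0, "2026-02-02T00:00": 2.0})
save_overwrite(store, "kpi", {"2026-02-03T00:00": 3.0})
# History for Feb 1-2 is gone: only the Feb 3 point remains.
assert store["kpi"] == {"2026-02-03T00:00": 3.0}

store = {}
save_append(store, "kpi", {"2026-02-01T00:00": 1.0, "2026-02-02T00:00": 2.0})
save_append(store, "kpi", {"2026-02-03T00:00": 3.0})
# All three points survive.
assert len(store["kpi"]) == 3
```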

Why is this a problem?

Because appending is not supported, every update would require:

  • Re‑uploading the entire historical dataset for each tag
  • Maintaining full history externally

This leads to:

  • Significant performance and bandwidth overhead
  • Long execution times
  • High operational risk (partial uploads or failures can wipe history)
  • A design that does not scale for near‑real‑time or historian‑like use cases
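A sketch of the only workaround available today: maintain the full history externally and re-upload all of it on every update (the external store is a plain dict here; in practice it would be a file or database, and the tag_data_dict structure passed to client.io.tag.save is omitted since its exact shape depends on the SDK):

```python
# External master history, maintained by the caller because appending
# is not supported. Every update re-uploads the ENTIRE series.

history = {}  # timestamp -> value; persisted externally in practice

def update_tag(new_points):
    history.update(new_points)       # merge the new points locally
    full_upload = dict(history)      # the whole history, every time
    # client.io.tag.save(tag_data_dict_built_from(full_upload), index=False)
    return len(full_upload)          # payload size grows without bound

assert update_tag({"t1": 1.0, "t2": 2.0}) == 2
assert update_tag({"t3": 3.0}) == 3  # uploads 3 points just to add 1
```

A failure mid-upload here risks leaving the tag with partial history, which is the operational risk noted above.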

Questions to the community/TrendMiner team

  1. Is there any supported way to append data to an existing tag time series (via SDK or API)?
  2. Is this overwrite behavior by design, or is append functionality planned?
  3. If append is intentionally not supported, what is the recommended scalable approach?

3 replies

  • Employee
  • February 13, 2026

Hi @B.Mohammed,

The endpoint/SDK functionality you are referring to is the Tag Builder CSV upload. By design, this functionality overwrites all previously stored data for that tag name. It is mainly intended for one-time data uploads, for example data that is not available in a historian and where setting up a historian connection would be disproportionate effort.

TrendMiner does support querying new data through the available data source connections. However, at this moment there is no time series write endpoint available in TrendMiner.

If this is an important requirement for your use case, I would recommend submitting a product idea on the Community so our product team can further evaluate and consider it.

For a scalable and future-proof approach, we recommend storing data that is incrementally appended and needs to remain permanently available in a database first. TrendMiner can then query that database and expose the data as a tag.
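This database-first pattern can be sketched with the standard library's sqlite3 (any database TrendMiner can connect to as a data source would do; the table and column names are illustrative, not a TrendMiner convention). New KPI values are appended as rows, and TrendMiner then exposes the table as a tag through a data source connection:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # a real file or DB server in practice
conn.execute("""
    CREATE TABLE IF NOT EXISTS kpi_results (
        tag_name TEXT NOT NULL,
        ts       TEXT NOT NULL,   -- ISO-8601 timestamp
        value    REAL NOT NULL,
        PRIMARY KEY (tag_name, ts)
    )
""")

def append_kpi(tag_name, ts, value):
    # True append: only the new point is written; history is untouched.
    # INSERT OR REPLACE makes re-running a calculation idempotent.
    conn.execute(
        "INSERT OR REPLACE INTO kpi_results VALUES (?, ?, ?)",
        (tag_name, ts, value),
    )
    conn.commit()

append_kpi("plant.kpi.oee", "2026-02-20T00:00:00", 87.5)
append_kpi("plant.kpi.oee", "2026-02-21T00:00:00", 91.2)

count = conn.execute(
    "SELECT COUNT(*) FROM kpi_results WHERE tag_name = ?",
    ("plant.kpi.oee",),
).fetchone()[0]
assert count == 2  # both points retained; nothing was overwritten
```

Each update writes only the new rows, so cost stays proportional to the new data rather than the full history.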

I hope this clarifies things.

Kind regards,
Frederik


  • Author
  • Explorer
  • February 17, 2026

Hi @fvandael,

Thanks Frederik for your feedback 😀

I have a use case where writing KPI calculation results to context items will not be scalable and will hit limitations in ContextHub, given the factory in scope. It would also be a bottleneck when connecting with Power BI.

For me, the best architecture is to write the KPI results into Formula/Custom Calculation tags; that way it is more scalable, and the Power BI connection can use the last-value endpoint.

So your proposal is basically to store the calculations outside TrendMiner, using a SQL DB (for example), and have TrendMiner and Power BI read from that DB?

Regards,

Mohammed.


  • Employee
  • February 20, 2026

Hi @B.Mohammed,


For now, I recommend storing the calculation outside of TrendMiner. You’re also welcome to submit a product idea requesting a time-series write endpoint so our product team can evaluate and consider it further.

Kind regards,
Frederik