Since the first days of ConSenses, we have been committed to finding the best solutions for acquiring technical data, processing it, and presenting it for daily use within company structures. We developed the hourglass model to provide a simple framework that enables fast structuring of large communication tasks.
For an effective description of processes within production systems, it is usually necessary to acquire comprehensive data at cycle times below one millisecond. The precision requirements for the synchrony of multiple sensors within a group are typically even higher. MES and ERP systems normally process data at far longer cycle times; on top of that, the processed data and its handling requirements are structured completely differently. ConSenses systems contain all the hardware and software needed to solve this specific industrial IoT task.
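The hourglass idea of bridging these layers can be sketched in a few lines: high-rate raw data comes in at the wide end, and only a few per-stroke key values leave toward the slower MES/ERP side. This is an illustrative sketch, not ConSenses code; the channel, rates, and key-value names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                                # assumed sensor cycle: 1 ms (1 kHz)
raw = rng.normal(50.0, 5.0, size=fs)     # one simulated stroke of raw force data

def condense(samples):
    """Reduce one stroke of high-rate samples to MES-friendly key values."""
    return {
        "peak": float(samples.max()),
        "mean": float(samples.mean()),
        "rms": float(np.sqrt(np.mean(samples ** 2))),
    }

kpis = condense(raw)
print(sorted(kpis))   # ['mean', 'peak', 'rms']
```

The narrow end carries three numbers per stroke instead of a thousand samples, which is roughly the rate an MES layer can absorb.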
This is an example of a pure blanking process. It shows what stroke data such a standard process may contain. Many standard monitoring or machine-protection systems are not capable of a) measuring this data and b) storing and transferring it for deeper analysis. It is therefore a good example of a case where the needs of industrial IoT are not met by "already there" data. The data was measured with PiezoBolts in the connecting rods of a mechanical press. Based on these insights, the technical departments of the customer and the machine supplier were able to change some tool and process details, which drastically reduced damage, downtime, and costs. The important point: the press-shop standard is that one peak value per stroke (100 values per minute) is considered sufficient. In fact, a sampling rate of 1000 values per second made the real bottlenecks transparent.
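Why one peak value per stroke can hide a problem is easy to demonstrate. The following is a minimal, purely synthetic sketch (the force curves and the secondary-impact fault are invented for illustration): a faulty stroke with a short secondary impact has exactly the same peak value as a good stroke, and only the full 1 kHz trace separates the two.

```python
import numpy as np

fs = 1000                       # 1000 values per second, as in the press example
t = np.arange(0, 0.6, 1 / fs)   # one 600 ms stroke

def stroke(fault=False):
    # idealised main force pulse of one stroke
    f = 100.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
    if fault:
        # short secondary impact, well below the main peak
        f += 30.0 * np.exp(-((t - 0.45) ** 2) / (2 * 0.002 ** 2))
    return f

good, bad = stroke(False), stroke(True)

# The per-stroke peak value -- the press-shop standard metric -- is identical:
print(round(good.max(), 1), round(bad.max(), 1))   # 100.0 100.0

# The full-rate signal separates the two strokes clearly:
print(np.abs(good - bad).max() > 10)               # True
```

A peak-based monitoring system would report both strokes as identical; the high-rate trace is what makes the fault visible.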
This is another example of an industrial process. Although it is a rather slow deep-drawing process, it contains sub-events that are rather fast. The process is instructive because it shows that the Nyquist theorem has to be taken seriously. We often see industrial applications where a handful of values per stroke (a peak value or something similar) is supposed to serve IIoT tasks. In most cases this approach is doomed, because the factors behind successful production, and the risks of damage, are usually hidden deeper in the data. With this video we want to emphasize this fact and give an overview of how sophisticated data condensation can be done.
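What "taking the Nyquist theorem seriously" means can be shown with a synthetic signal (the frequencies here are invented for illustration, not taken from the deep-drawing process): a slow 1 Hz stroke with a fast 80 Hz sub-event, sampled once at 1 kHz and once at only 50 Hz.

```python
import numpy as np

f_slow, f_fast = 1.0, 80.0   # assumed slow stroke and fast sub-event frequencies

def signal(t):
    return np.sin(2 * np.pi * f_slow * t) + 0.3 * np.sin(2 * np.pi * f_fast * t)

t_fine = np.arange(0, 1, 1 / 1000)   # 1 kHz: well above the 160 Hz Nyquist rate
t_coarse = np.arange(0, 1, 1 / 50)   # 50 Hz: far below it

# Sampled at 50 Hz, the 80 Hz component is indistinguishable from a 30 Hz
# one (80 - 50 = 30): the coarse samples match a different signal exactly.
aliased = np.sin(2 * np.pi * f_slow * t_coarse) + 0.3 * np.sin(2 * np.pi * 30.0 * t_coarse)
print(np.allclose(signal(t_coarse), aliased))    # True
```

Undersampling does not merely blur the fast sub-event; it replaces it with a fictitious slower one, so conclusions drawn from such data can be systematically wrong.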
Data quality is a major topic whenever serious decisions are to be made. In industrial IoT applications, quality is always a matter in which interdisciplinary expertise is crucial. In many cases, the blind spots of individual experts lead to an unwanted loss of quality. The most common effect is that "anyway-there" data is used because its cost seems attractive and because someone, at some point, declared it to be good. We have seen in many cases that this approach leads down the wrong track: data that is good for machine control or organizational tasks is not automatically good for technical decisions. For this reason, we recommend a systematic approach that accounts for the interdisciplinary character of the task and allows a methodical, comprehensible understanding of quality requirements and losses.