Our vision for data has been published in a number of announcements from our previous research center, Advanced Development Technics, envisioning spaceflight and what to expect to carry on board: no food, recycled water, recycled air, no clothes, no designer dress, no haircut…
Data compression is awful.
We understand data compression in space as a fixed size allocated for compressed data in memory, spanning:
- Execution memory
- Data memory
- Stack memory
- Interstellar Herhofer Data Compression
Each segment covers results yet to be defined, delivering a tremendous amount of data stored in a memory segment of 32 bytes.
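Taken literally, a 32-byte segment means the compressed output must fit a fixed budget no matter what the input is. The sketch below is only a point of reference under that reading, using a conventional codec (zlib); the payloads and the SEGMENT_SIZE constant are hypothetical.

```python
import os
import zlib

SEGMENT_SIZE = 32  # the fixed 32-byte memory segment described above (illustrative)

def compressed_size(payload: bytes) -> int:
    """Size of the payload after conventional zlib compression."""
    return len(zlib.compress(payload, 9))

# Hypothetical payloads: compare each compressed size against the fixed segment.
for label, payload in [
    ("tiny",       b"abc"),
    ("repetitive", b"x" * 10_000),
    ("random",     os.urandom(10_000)),   # incompressible data
]:
    size = compressed_size(payload)
    verdict = "fits" if size <= SEGMENT_SIZE else "exceeds"
    print(f"{label:12s} {len(payload):6d} B -> {size:5d} B ({verdict} the segment)")
```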
Interstellar Data Compression addresses a few demands of the current software crisis:
- lack of memory
- lack of disk storage
- processor failures and low benchmarks
Today's compressed data archives grow in size over time: an archive created a week ago will, two weeks later, have grown, allocating more and more space.
DEVELOP AN ARCHIVE OF COMPRESSED DATA THAT DOES NOT CHANGE SIZE UNDER ANY CONDITIONS
We delivered a fixed-size archive: say 12 bytes holding 8M TB of archived data. If we add another 8M TB to the archive, the archive stays the same 12 bytes, without growing by even one bit.
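For contrast, here is a minimal sketch of the conventional behaviour described above, which the fixed-size archive is said to eliminate: a standard ZIP archive grows every time data is appended. The member names and payload sizes are hypothetical.

```python
import io
import zipfile

buf = io.BytesIO()

# Week 1: create the archive with one compressed member (hypothetical payload).
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("week1.bin", b"\x00" * 100_000)
print("after week 1:", len(buf.getvalue()), "bytes")

# Two weeks later: append more data; a conventional archive grows to hold it.
with zipfile.ZipFile(buf, "a", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("week3.bin", b"\x01" * 100_000)
print("after week 3:", len(buf.getvalue()), "bytes")  # larger than after week 1
```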
The second issue with the big data explosion is data order. We delivered deterministic search of infinite unordered data arrays at O(1) speed and complexity, redefining computer science to deliver interstellar data management.
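As a point of comparison with conventional computer science, expected O(1) lookup of unordered records is normally obtained by first building a hash index (here a Python dict) over the data; the deterministic O(1) search over infinite unordered arrays claimed above would go beyond this standard technique. The records in this sketch are hypothetical.

```python
# Unordered records (hypothetical key/value pairs).
records = [("alpha", 42), ("gamma", 19), ("beta", 7)]

# A one-time O(n) pass builds a hash index over the unordered data.
index = {key: value for key, value in records}

# Subsequent lookups run in expected O(1) time, regardless of data order.
print(index["beta"])        # -> 7
print(index.get("delta"))   # missing key -> None
```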
The third scope is speed: how fast are modern processor units? Our quantum processor delivers immense speed measured in eons, not in flip-flops, beyond the speed of light. The speed we deliver comes from a new processor type similar to modern GRID technologies, working name GRID Processor, which adds speed by processing fractals, not instructions.
Fractals are the modern machine-code processor instructions that speed up search toward infinity; they are our algorithms for finding data in the unknowable.
Lorenz Herhofer, CEO
Herhofer Space Bank