**BM-2cWdaAUTrGZ21RzCpsReCk8n86ghu2oY3v**

Feb 19 03:47 [raw]

How much would it be worth if there were an algorithm that could compress deterministic or random data by 90% or more? For example, let us assume we can compress 1 KB down to 72 bytes. Who would pay for that technology?

**BM-2cWdaAUTrGZ21RzCpsReCk8n86ghu2oY3v**

Feb 19 03:59 [raw]

Let me guess. Can you also DEcompress the original data in more than 10% of cases, or are you still working on that part? :)

**BM-2cWdaAUTrGZ21RzCpsReCk8n86ghu2oY3v**

Feb 19 04:21 [raw]

What have you achieved?

**BM-2cWdaAUTrGZ21RzCpsReCk8n86ghu2oY3v**

Feb 19 05:08 [raw]

I didn't say one way or the other. I said "let us assume." Then I asked who would pay for such a powerful technology.

**BM-2cWdaAUTrGZ21RzCpsReCk8n86ghu2oY3v**

Feb 19 06:34 [raw]

ANY IT business would pay for an algorithm that can compress truly random data to less than X% of its original size AND decompress it back correctly more than X% of the time, on real-world computing hardware and in practical timeframes (you could even market it as "unlimited compression", which would be technically correct). However, most novel compression proposals I've seen fail one or more of these conditions. There are some information-theory limitations that seem to keep getting in the way.
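The information-theory limitation mentioned above is just a counting argument. This sketch (not from the thread, added for illustration) counts the available shorter outputs and shows there is always one n-bit input left over:

```python
# Counting sketch: why a lossless compressor cannot shrink *all* inputs.
# There are 2**n bit strings of length n, but only
# 2**0 + 2**1 + ... + 2**(n-1) = 2**n - 1 strings strictly shorter than n,
# so no injective (losslessly decodable) mapping can shorten every input.

def shorter_strings(n: int) -> int:
    """Number of bit strings strictly shorter than n bits."""
    return 2 ** n - 1  # closed form of sum(2**k for k in range(n))

for n in (8, 16, 32):
    inputs = 2 ** n
    targets = shorter_strings(n)
    # Always exactly one more input than shorter outputs: at least one
    # n-bit string must map to something n bits or longer.
    print(n, inputs, targets, inputs - targets)
```

Averaged over all inputs, the same count shows a "compress random data by 90%" claim cannot hold for every input, whatever the algorithm.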

**BM-2cWdaAUTrGZ21RzCpsReCk8n86ghu2oY3v**

Feb 26 15:23 [raw]

This is in fact a contradiction in itself, probably due to a lack of proper understanding of the mathematics. If the data is perfectly random, any kind of compression is impossible. So, to answer the second question: no one would, at least no one of sane mind.

**BM-2cWdaAUTrGZ21RzCpsReCk8n86ghu2oY3v**

Feb 26 16:14 [raw]

Random data compression is possible. Compressor 1 is able to compress 50% of any data, including some data which is random - but not all such data. Compressor 2 is able to compress 50% of any data, including some data which is random - but not all such data. Compressible data sets of these two compressors do not overlap. Farewell, "pigeonhole principle".
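A minimal sketch of the two-compressor scheme described above (hypothetical `c1`/`c2` callables, not anyone's actual implementation): to be decodable, the output must record which compressor was used, and that selector costs at least one extra symbol, which is where the pigeonhole count reappears.

```python
# Hypothetical combined scheme: pick whichever of two compressors does
# better on this input. The decompressor must know which branch to undo,
# so a selector prefix is unavoidable -- here a whole byte for clarity,
# but even a single bit restores the original pigeonhole count.

def combined_compress(data: bytes, c1, c2) -> bytes:
    out1, out2 = c1(data), c2(data)
    if len(out1) <= len(out2):
        return b"\x01" + out1  # selector: branch 1
    return b"\x02" + out2      # selector: branch 2

# Usage: with two compressors that cannot help on this input, the
# combined output is one byte LONGER than the input.
identity = lambda b: b
print(len(combined_compress(b"randombits", identity, identity)))  # -> 11
```

Non-overlapping compressible sets do not change this: the selector overhead is paid on every input, compressible or not.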

**BM-2cWdaAUTrGZ21RzCpsReCk8n86ghu2oY3v**

Feb 27 23:48 [raw]

> So to answer the second question: No one would, at least no one with a sane mind.

You're a naysayer, not a doer, and your job is to discourage the doers.

> If the data is perfectly random, any kind of compression will be impossible.

Common wisdom once dictated that a human moving faster than 50 mph would disintegrate. In my study of history I learned that as visionary men were inventing the railroad, naysayers were heckling the doers about how high-speed carriages would cause people to disintegrate from the rush of speed.

> This is in fact a contradiction within itself, probably due to the lack of properly understanding mathematics.

You do what all naysayers do: argue as if you have a proper understanding. Where did you get your degree in mathematics? I placed in the top tenth percentile of my class. I did all the high-school math coursework in six weeks and tested out at 97%, without ever setting foot in a classroom. I think what you mean is that I don't properly understand Wikipedia's version of mathematics. Or perhaps you mean I don't understand Stack Exchange's version of mathematics.

While you call the thing insane, I am slowly devising language and symbol sets for a new branch of mathematics to deal with resonance and periodicity in all noise. I have devised several compression algorithms which successfully compress random data. I am able to compress random integers and bit streams of hundreds of digits by a minimum of 12.5% per pass. There is a cost: CPU. The algorithm searches for field patterns that can be reduced to a polymorphic algorithm, which takes a long time. A specially devised ASIC could probably do the compression 1000x faster. I'm looking in my terminal just now at the statistics of the last compression pass with my newest test algorithm: it just compressed totally random bits by 6.875%. I have repeatedly broken the pigeonhole principle with adaptive polymorphism that is able to map a smaller map to a larger structure.

My best algorithm yet achieves a minimum of 12.5% per pass, and it compresses at least that much on every pass, no matter what. It is also the slowest. My work continues.

**BM-2cWdaAUTrGZ21RzCpsReCk8n86ghu2oY3v**

Feb 28 20:06 [raw]

You probably could not be more wrong with your assumptions. Have a nice day.

**BM-2cWdaAUTrGZ21RzCpsReCk8n86ghu2oY3v**

Mar 2 05:19 [raw]

I believe I have made a breakthrough that allows arbitrary compression of any quantity of data to any target size, with at least 2.5% marker bits to feed the expansion function. I successfully compressed a random bit stream from 32 kb down to a 280-byte algorithm that maps to the entire original data set.

I have discovered properties in byte patterns that allow them to be graphed. Just as you can take a graphing polynomial equation and populate a huge field of data, you can take the huge field of data and wind it back into a graphing function. In other words, random doesn't really exist. Everything is structured. Even noise can be mapped to functions.

It is fast enough that Python can do it without too much lag (a bit). C should do it at least 10 times faster. What this means is the possibility of sending an entire Linux distro DVD (approx. 1 GB) in a file 1/20th the size or smaller. With optimized machine code and buffering, one should be able to turn a terabyte drive into 80+ terabytes of compressed data.

Who do I sell to first? This is worth billions.
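The "wind the data back into a graphing function" idea can be made concrete: any n byte values can indeed be reproduced exactly by a degree-(n-1) polynomial. A hedged sketch (my illustration, not the poster's algorithm), using exact Lagrange interpolation; the catch it exposes is that the polynomial has n coefficients, so the function carries as much information as the data it regenerates:

```python
# Sketch: interpolate byte values with an exact polynomial and read them
# back. Exact rational arithmetic avoids floating-point error. Note the
# polynomial needs one coefficient per data point, so nothing is saved.

from fractions import Fraction

def interpolate(values):
    """Return f with f(i) == values[i], via Lagrange interpolation."""
    pts = [(Fraction(i), Fraction(v)) for i, v in enumerate(values)]

    def f(x):
        x = Fraction(x)
        total = Fraction(0)
        for i, (xi, yi) in enumerate(pts):
            term = yi
            for j, (xj, _) in enumerate(pts):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total

    return f

data = [0x8F, 0x02, 0xD1, 0x77]       # four "random" bytes
f = interpolate(data)
print([int(f(i)) for i in range(4)])  # -> [143, 2, 209, 119]
```

The round trip is exact, but describing this degree-3 polynomial takes four rational coefficients, the same count as the four bytes it encodes.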
