Compress a signal by storing signal diff instead of actual samples - is there such a thing?
I am working with EMG signals sampled at 2 kHz and 16 bits, and noticed that they "look smooth", that is, the signals are differentiable, and if I apply a "diff" function (numpy.diff in my case) the magnitude of the values is considerably lower than that of the actual samples.

So I am considering doing something like:

- Split the signal into chunks of a given size;
- For each chunk, using a variable-length quantity encoding (or similar), create a byte list and:
  - for the first sample of the chunk, add its absolute value;
  - for the remaining samples of the chunk, add their difference relative to the previous value.

This way, the smoother the signal and the closer it is to the baseline, the more I expect to reduce the byte size of each chunk, by decreasing the individual byte size of a large share of the samples.

Although I suspect this would improve things for me, I also suspect that this is nothing new; perhaps it has a proper name, and there may be more elegant/efficient ways to implement it.

So the question is: what is the name of this compression technique, and what are its alternatives and/or variants?
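A minimal sketch of the scheme described above, in Python/NumPy. The chunk size, the LEB128-style little-endian byte layout, and the zigzag mapping (used here instead of a plain absolute value so the sign of the first sample and of each delta survives) are illustrative choices, not part of any particular standard:

```python
import numpy as np

def zigzag(v):
    # Map signed integers to unsigned ones so small magnitudes stay small:
    # 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, ...
    return (v << 1) if v >= 0 else (-v << 1) - 1

def vlq(u):
    # Variable-length quantity, LEB128 style: 7 payload bits per byte,
    # high bit set on every byte except the last.
    out = []
    while True:
        byte = u & 0x7F
        u >>= 7
        if u:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def encode_chunk(chunk):
    # First sample stored as-is (zigzag-mapped), the rest as deltas.
    deltas = np.diff(chunk.astype(np.int64))
    payload = vlq(zigzag(int(chunk[0])))
    for d in deltas:
        payload += vlq(zigzag(int(d)))
    return payload

def encode(signal, chunk_size=256):
    return [encode_chunk(signal[i:i + chunk_size])
            for i in range(0, len(signal), chunk_size)]

# Toy 16-bit "smooth" signal: slowly varying sine plus small noise.
rng = np.random.default_rng(0)
x = (3000 * np.sin(2 * np.pi * 5 * np.linspace(0, 1, 2000))
     + rng.normal(0, 3, 2000)).astype(np.int16)
chunks = encode(x)
print("raw bytes:", x.nbytes, "encoded bytes:", sum(len(c) for c in chunks))
```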
Tags: discrete-signals, digital-communications, sampling, compression

Asked yesterday by heltonbiker.
Comments:

– MBaz (yesterday): See en.wikipedia.org/wiki/…

– heltonbiker (yesterday): @MBaz I think your comment contains the correct answer. If you write it down I would most probably accept it. Thanks for now!

– leonbloy (7 hours ago): BTW, this is also done in image compression, in the PNG format, line by line (except that for each line you can choose between taking the difference with respect to the pixel to the left or the pixel above, two other predictors, or none at all); the standard calls this "filtering", but it is really a typical "predict and code the prediction error" scheme, of which your technique is a basic case: en.wikipedia.org/wiki/Portable_Network_Graphics#Filtering
3 Answers
Answer by Olli Niemitalo:

You can also think of delta encoding as linear predictive coding (LPC) in which only the prediction residual ($x[n]-\hat{x}[n]$ in @robert bristow-johnson's notation) is stored and the predictor of the current sample is the previous sample. This is a fixed linear predictor (not one with arbitrary coefficients optimized to the data) that can exactly predict constant signals. Run the same linear predictive coding again on the residuals, and you have exactly predicted linear signals. Next round, quadratic signals. Or run a higher-order fixed predictor once to do the same.

Such fixed predictors are listed in Tony Robinson's SHORTEN technical report (yours is Eq. 4), and are also included in the FLAC lossless audio codec, although they are not often used. Calculating the best prediction coefficients for each data block and storing them in a header of the compressed block gives better compression than using fixed predictors.

The linear predictor is supposed to do the whitening, making the residuals independent. In lossless compression, what is left to do is to entropy code the residuals, instead of using run-length or other symbol-based encoding that doesn't work so well on noisy signals. Typically, entropy coding assigns longer code words to large residuals, approximately minimizing the mean encoding length for an assumed distribution of the residual values. A variant of Rice coding compatible with signed numbers is typically used, as is done in FLAC; see the source code of FLAC__bitwriter_write_rice_signed. The Rice code has a distribution parameter that needs to be optimized for each data block and saved in the block header.
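A rough sketch of these two ingredients in Python/NumPy, under my own simplifications: order-k fixed-predictor residuals are computed as k-fold differences, the Rice parameter is chosen per block by brute force, and the k warm-up samples that would be stored verbatim are ignored in the bit count. This is not FLAC's or SHORTEN's actual code, and all function names are mine:

```python
import numpy as np

def fixed_predictor_residual(x, order):
    # Order-k fixed-predictor residual = k-th finite difference
    # (order 0: the samples themselves; order 1: plain delta coding).
    r = x.astype(np.int64)
    for _ in range(order):
        r = np.diff(r)
    return r

def rice_length_bits(residuals, k):
    # Bits needed to Rice-code the zigzag-mapped residuals with parameter k:
    # each value u costs (u >> k) + 1 unary bits plus k binary bits.
    u = np.where(residuals >= 0, 2 * residuals, -2 * residuals - 1)
    return int(np.sum((u >> k) + 1 + k))

def best_encoding(block, max_order=3, max_k=15):
    # Pick the predictor order and Rice parameter that minimize the encoded
    # size of this block; both would be stored in the block header.
    best = None
    for order in range(max_order + 1):
        res = fixed_predictor_residual(block, order)
        for k in range(max_k + 1):
            bits = rice_length_bits(res, k)
            if best is None or bits < best[0]:
                best = (bits, order, k)
    return best

# Example: a smooth 16-bit block.
t = np.linspace(0, 1, 4096)
block = (10000 * np.sin(2 * np.pi * 30 * t)).astype(np.int16)
bits, order, k = best_encoding(block)
print(f"order={order}, Rice k={k}: {bits} bits vs {block.size * 16} bits raw")
```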
Comment:

– Fat32 (yesterday): Similar to your suggestion, subband ADPCM would possibly be the best choice...
Answer by Hilmar:

That's used a lot. See for example https://en.wikipedia.org/wiki/Delta_encoding and https://en.wikipedia.org/wiki/Run-length_encoding.

"Looking smooth" typically means "not a lot of high-frequency content". The easiest way to take advantage of this is to figure out the highest frequency you really need, then low-pass filter and choose a lower sample rate.

If your signal has a non-flat spectrum, it's typically advantageous to "whiten" it, i.e. filter it so that its average spectrum is white, then encode; to recover the signal, decode and filter with the inverse filter. This way you spend more bits on the high-energy frequencies and fewer on the low-energy ones, and your quantization noise follows the spectrum of the signal.

The scheme that you suggest is one of the simplest forms of this approach: your whitening filter is a differentiator and your inverse filter is an integrator.
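As a concrete illustration (with a made-up toy signal, not EMG data): using a first difference as the whitening filter and a running sum as its inverse, the round trip below is exact because the deltas are kept at full integer precision, and the delta magnitudes come out much smaller than the sample magnitudes:

```python
import numpy as np

def whiten(x):
    # Differentiator: keep the first sample, then store first differences.
    x = x.astype(np.int64)
    return np.concatenate(([x[0]], np.diff(x)))

def unwhiten(d):
    # Integrator: the running sum undoes the differentiator exactly.
    return np.cumsum(d)

rng = np.random.default_rng(1)
x = (5000 * np.sin(2 * np.pi * 10 * np.linspace(0, 1, 2000))
     + rng.normal(0, 5, 2000)).astype(np.int16)

d = whiten(x)
assert np.array_equal(unwhiten(d).astype(np.int16), x)  # lossless round trip
print("max |sample|:", np.abs(x).max(), "max |delta|:", np.abs(d[1:]).max())
```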
Answer by robert bristow-johnson:

Another notion you might want to look into for lossless compression of a band-limited signal (it's this band-limiting that gets you this "smoother ... signal, ... closer ... to the baseline") is Linear Predictive Coding.

I think it is historically correct that LPC was first used as a variant of delta coding, where the LPC algorithm predicts $\hat{x}[n]$ from the set of samples $x[n-1], x[n-2], \ldots, x[n-N]$. If the prediction is good, then the real $x[n]$ is not far off from the prediction $\hat{x}[n]$, and you only need to store the delta $x[n]-\hat{x}[n]$, which is smaller in magnitude, so a smaller word width might be sufficient. You would need to store the LPC coefficients for each block, but there are usually no more than a dozen or so of these.

This stored difference value can be compressed further using something like Huffman coding, in which you would need to either store the "codebook" along with the compressed data or have some kind of codebook standardized so that both transmitter and receiver know it.

I think it's some combination of LPC and Huffman coding that is used by various lossless audio formats. Maybe some perceptual processing is used too, to get almost-lossless compression.
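A hedged per-block sketch of this idea in Python/NumPy: the coefficients are fitted by plain least squares with numpy.linalg.lstsq rather than the autocorrelation/Levinson-Durbin machinery a real codec would use, the entropy-coding stage (Huffman or Rice) is omitted, and every name here is my own. The round trip is lossless because encoder and decoder derive identical rounded predictions from the same already-decoded samples:

```python
import numpy as np

def _predict(a, past):
    # Rounded prediction of the next sample from the previous len(a) samples
    # (past[0] is the most recent); shared by encoder and decoder so their
    # floating-point arithmetic matches exactly.
    return int(np.rint(float(np.dot(a, past))))

def lpc_encode_block(block, order=8):
    # Fit coefficients a so that x[n] ~= sum_i a[i] * x[n-1-i]; keep the
    # coefficients, the first `order` samples, and the integer residuals.
    x = block.astype(np.float64)
    A = np.column_stack([x[order - 1 - i:len(x) - 1 - i] for i in range(order)])
    a, *_ = np.linalg.lstsq(A, x[order:], rcond=None)
    residual = []
    for n in range(order, len(block)):
        past = x[n - 1::-1][:order]          # x[n-1], x[n-2], ..., x[n-order]
        residual.append(int(block[n]) - _predict(a, past))
    return a, block[:order].copy(), np.array(residual, dtype=np.int64)

def lpc_decode_block(a, head, residual):
    # Rebuild the block sample by sample from already-decoded samples.
    out = [int(v) for v in head]
    for r in residual:
        past = np.array(out[:-len(a) - 1:-1], dtype=np.float64)  # most recent first
        out.append(_predict(a, past) + int(r))
    return np.array(out, dtype=np.int64)

# Example block.
t = np.linspace(0, 1, 1024)
block = (8000 * np.sin(2 * np.pi * 40 * t)).astype(np.int16)
a, head, res = lpc_encode_block(block, order=8)
assert np.array_equal(lpc_decode_block(a, head, res), block.astype(np.int64))
print("max |sample|:", int(np.abs(block).max()),
      "max |residual|:", int(np.abs(res).max()))
```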