Read on to learn about audio conversion settings: what bitrate is, how it affects the quality of music and video, and which is better, 128 or 320 kbps.

Have you ever wondered what exactly is lost when lossless audio is compressed to MP3 at 128 kbps or 320 kbps?
I checked, and the result seemed interesting. First of all, I propose a survey so that you can find out for yourself whether you hear any difference at all. If you are not sure whether you hear it, or are confident that you do not, here is a simple and elegant idea: take the two sound waves, put one of them in antiphase, and mix the two tracks; what the codec threw away becomes clearly audible in the result. I do not promise interesting charts yet, but you can hear for yourself which sounds are lost when compressing from FLAC to MP3 at 128/320 kbps; an archive with examples is at the end of the article.
Survey
You need to download and listen to 12 tracks of 30 seconds each. Then, for each of the 4 compositions, indicate which of the 3 options each version is (128 kbps, 320 kbps, or lossless).
The survey is anonymous, but you can enter a unique hash and tell it to me or, as a last resort, publish your answers here, but be sure to hide them under a spoiler. The survey will run until 25.02, after which the answer key and the statistics will be published.
The files are on Yandex Disk, with a mirror on Dropbox (~80 MB).
Initial data
The Black Keys - Everlasting Light (FLAC, 44100 Hz, 24-bit, 1613 kbps), you can listen on YouTube.
Ludovico Einaudi - Drop (FLAC, 96000 Hz, 24-bit, 2354 kbps), you can listen on Yandex Music.
CC Coletti - Rock and Roll (FLAC, 192000 Hz, 24-bit, 4845 kbps), you can listen on YouTube.
Annihilator - Ultra-Motion (FLAC, 44100 Hz, 16-bit, 1022 kbps), you can listen on YouTube.
MP3 conversion parameters
44.1 kHz, stereo, 128 kbps or 320 kbps
Description of the experiment
The source files are cut into 10-second pieces, and each piece is exported to WAV. After importing the resulting tracks, 2 seconds of silence and a one-second tone signal are added to the beginning of each, and the result is converted to MP3. After importing the MP3 files back, it turns out that the resulting file has shifted forward relative to the original. This is not a bug, this is how the encoder works. We synchronize against the tone signal of the original (for each MP3 file I tried several offset values and refined them toward the best result), remove the tone signal and the silence, and export the resulting tracks to WAV. Now it only remains to invert the tracks, so that the peaks point in opposite directions, and mix them with the original.
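For readers who want to reproduce the inversion-and-mix ("null test") step themselves, here is a minimal sketch, not the author's actual tooling, using Python with numpy and soundfile (assumed to be installed); the file names are placeholders and the two files are assumed to be already aligned and at the same sample rate.

```python
import numpy as np
import soundfile as sf  # assumed dependency: pip install soundfile

# Hypothetical file names: an aligned original segment and its MP3 round-trip
original, sr = sf.read("original_segment.wav")
decoded, sr2 = sf.read("mp3_roundtrip_segment.wav")
assert sr == sr2, "sample rates must match before subtracting"

# Trim to the shorter length in case the encoder padded the decoded file
n = min(len(original), len(decoded))
residual = original[:n] - decoded[:n]  # inverting one track and mixing == subtraction

# What remains is what the codec discarded (plus any alignment error)
sf.write("difference.wav", residual, sr)
print("peak residual:", np.max(np.abs(residual)))
```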
Result
I am not going to reveal anything new here... Yes, there is a difference. Yes, especially when compressing to 128 kbps. Yes, it depends on the music. Yes, and even more on the audio codec.
You can draw your own conclusions and hear the difference by downloading the example files yourself.

Nowadays there is a lot of talk about how we lost real music with the arrival of compressed audio formats such as MP3, AAC, and the like. Is that true? Will lossless formats save us? Can an untrained listener distinguish music in MP3 format from FLAC? Let's figure it out.

What is bitrate?

You have probably heard the term "bitrate" before, and you probably have a general idea of what it means, but it would not hurt to get acquainted with its formal definition so that you know how it all works.

Bitrate is the number of bits, or the amount of data, processed over a certain period of time. For audio, this usually means kilobits per second. For example, the music you buy in iTunes is encoded at 256 kilobits per second, meaning that every second of a song contains 256 kilobits of data.

The higher the bitrate of a track, the more space it takes up on your computer. As a rule, an audio CD takes up quite a lot of space, so it became normal practice to compress these files so that you can fit more music on your hard drive (or iPod, Dropbox, or anything else). This is where lossless and lossy formats come into play.

Lossless and Lossy Formats: What is the difference?


When we say "without loss", we mean that we really did not change the source file. That is, we copied the track from the CD to our hard disk, but did not squeeze it to such an extent that we lost any data. This is essentially the same as the original CD track.

Most often, however, you probably rip your music into a lossy format. That is, you took a CD, copied it to your hard disk, and compressed the tracks so that they do not take up much space. A typical album in a lossy format takes 100 MB or so. The same album in a lossless format, such as ALAC (also known as Apple Lossless), will take about 300 MB, so it has become common practice to use lossy formats for faster downloads and greater hard-disk savings.

The problem is that when you compress a file to save space, you throw away pieces of data. Just as when you take a high-quality image and compress it to JPEG, your computer takes the source data and discards certain parts of the image, keeping it mostly the same but with some loss of clarity and quality.

As an example, take the two images below: the one on the right is clearly compressed, and as a result its quality has decreased.

Remember that by compressing music into lossy formats you save hard-disk space, which can matter a great deal on an iPhone with 32 GB of memory, but it is simply a trade-off between size and quality.

There are various compression levels: a 128 kbps file, for example, takes up very little space but will also have lower playback quality than a larger 320 kbps file, which in turn is lower in quality than a reference file at 1,411 kbps. 1,411 kbps is audio-CD quality, which in most cases is more than enough.
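The 1,411 kbps figure is not arbitrary: it follows directly from the CD format's parameters. A quick check, as a sketch:

```python
# Audio CD: 44,100 samples per second, 16 bits per sample, 2 channels
cd_bitrate_bps = 44_100 * 16 * 2
print(cd_bitrate_bps)                          # 1_411_200 bits/s, i.e. the 1,411 kbps quoted above
print(cd_bitrate_bps / 8 / 1_000_000 * 60)     # ~10.6 MB per minute of uncompressed CD audio
```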

The whole question is not only how strongly the music is compressed, but also what equipment you listen to it on.

Does bitrate really matter?


Since memory becomes cheaper every year, listening to audio at a higher bitrate, or even in lossless formats, is becoming more and more popular. But is it worth the effort and the storage it occupies on your phone or computer?

I do not like answering questions this way, but unfortunately the answer is: it depends.

Part of the equation is the equipment you use. If you use a high-quality pair of headphones or speakers, you are accustomed to wide frequency and dynamic ranges. So you will most likely notice the shortcomings that appear when music is compressed to files with a lower bitrate. You may notice that low-quality MP3 files lack a certain level of detail: subtle background parts become harder to make out, the high and low frequencies are not as dynamic, or you may hear distortion in the lead vocal. In these cases, you may need a track with a higher bitrate.

However, if you listen to your music on a pair of cheap headphones plugged into your iPod, you probably won't notice the difference between a 128 kbps file and a 320 kbps file, let alone lossless music at 1,411 kbps. Remember the compressed image a few paragraphs above, and how you probably had to peer at it to see the flaws? Your headphones are like a scaled-down version of that image: they make these flaws hard to perceive, because they are physically unable to reproduce the music as it should be.

Another part of the equation, of course, is your own ears. Some people find it very difficult to tell two different bitrates apart for a simple reason: they rarely listen to music attentively. Hearing, like any other skill, develops with practice. If you often and carefully listen to your favorite music, your hearing becomes more accurate and starts to pick up small details and overtones. But until then, does it really matter what bitrate you use?

So what format and bitrate should you choose? Will 320 kbps be enough, or do you need a lossless format?

The fact is that it is difficult to hear the difference between a lossless file and a 320 kbps MP3. To hear it, you need serious high-quality equipment, good hearing, and a certain type of music (classical or jazz, for example).

For the overwhelming majority of people, 320 kbps is more than enough for listening.

What else should you take into account?


Music stored in a lossless format can be useful later. Lossless files are more future-proof, in the sense that you can always compress them to a lossy format when you need to, but you cannot do the reverse and restore the original CD quality from an MP3 file. This, again, is one of the fundamental problems with online music stores: if you have built a huge music library in iTunes and one day decide that you need a higher bitrate, you will have to buy it all again, only this time on CD.

Whenever possible, I always buy or rip music in a lossless format for backup purposes.

I understand that for audiophiles this is like needles under the fingernails. As I said, it all depends on you, your hearing, and the equipment you have.

Compare the same track in lossless and lossy formats. Try several different audio formats, listen to them for a while, and see whether the difference matters to you or not.

In the worst case, you will spend a few hours listening to your favorite music, which is not so bad, right? Enjoy!

Pros and cons of MP3 at 128 kbps

Audio compression is a complicated thing; nothing can be said in advance... The most common format today is MPEG Layer 3 with a 128 kbps stream, which provides quality that at first glance is indistinguishable from the original. It is also frivolously called "CD quality". Nevertheless, almost everyone knows that many people turn up their noses at such "CD quality". What is wrong? Why is this quality not enough? It is a very difficult question. I am an opponent of 128 kbps compression, since the result is sometimes dreadful. But I also have a number of 128 kbps recordings that I practically cannot find fault with. Whether a 128 kbps stream is suitable for encoding a particular piece of material becomes clear, unfortunately, only after repeated listening to the result. Nothing can be said in advance: personally, I do not know of any signs that would allow one to predict how successful the result will be. Yet quite often a 128 kbps stream is entirely sufficient for high-quality encoding of music.

For encoding at 128 kbps it is best to use products from Fraunhofer: MP3 Producer 2.1 or later. The exception is MP3enc 3.0, which has an annoying bug that leads to very poor encoding of high frequencies. Versions above 3.0 do not suffer from this flaw.

First of all, a few general words. A person's perception of the sound picture depends very strongly on the symmetric transmission of the two channels (stereo). Different distortions in different channels are much worse than identical ones. Generally speaking, ensuring sound characteristics that are as identical as possible in both channels while the material itself differs (otherwise, what kind of stereo is it?) is a big sound-recording problem that is usually underestimated. If 64 kbps is enough to encode mono, then for encoding stereo as simply two channels, 64 kbps per channel will not be enough: the stereo result will sound much less correct than each channel taken separately. In most Fraunhofer products the limit for mono is 64 kbps, and I have not yet seen a mono recording (a clean recording, without noise or distortion) that would require a larger stream. For some reason our attachment to monophonic sound is much weaker than to stereo; apparently, we simply do not take it seriously :) From a psychoacoustic point of view, it is just sound coming out of a speaker, not an attempt to build a complete sound picture.

An attempt to transmit a stereo signal imposes much more stringent requirements. After all, have you ever heard of a psychoacoustic model that takes into account the masking of one channel by the other? Some reverse effects are also ignored, for example a stereo effect that is designed for both channels at once. The left channel, taken separately, masks its own part of the effect, and we will not hear it. But the presence of the right channel, carrying the second part of the effect, changes our perception of the left channel: we subconsciously expect to hear the left part of the effect, and this change in our psychoacoustics should also be taken into account. With mild compression, 128 kbps per channel (256 kbps in total), these effects are practically absent, since each channel is represented fully enough to cover the need for symmetric transmission, but for streams of about 64 kbps per channel this is a big problem: conveying the subtle nuances of the joint perception of both channels requires more accurate transmission than is possible in such streams today.

Of course, one could build a full psychoacoustic model for two channels, but the industry took a different path, generally equivalent to it but much simpler: the family of algorithms under the common name Joint Stereo, a partial solution to the problems described above. Most of these algorithms come down to extracting a central channel and a difference channel, i.e. mid/side stereo. The central channel carries the main audio information and is an ordinary mono channel formed from the two source channels, while the difference channel carries the rest of the information, which allows the original stereo sound to be restored. By itself this operation is completely reversible; it is simply another way of representing two channels, one that is easier to work with when compressing stereo information.
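The mid/side representation described here is easy to illustrate. A minimal numpy sketch (not the encoder's actual code) showing that the transform itself loses nothing:

```python
import numpy as np

def ms_encode(left, right):
    """Mid/side representation: mid carries the common (mono) part,
    side carries what differs between the channels."""
    mid = (left + right) / 2.0
    side = (left - right) / 2.0
    return mid, side

def ms_decode(mid, side):
    """Exact inverse: the transform by itself is fully reversible."""
    return mid + side, mid - side

# Toy stereo signal: a tone mostly in the center plus a small channel difference
t = np.linspace(0, 1, 44_100, endpoint=False)
left  = np.sin(2 * np.pi * 440 * t) + 0.2 * np.sin(2 * np.pi * 3 * t)
right = np.sin(2 * np.pi * 440 * t) - 0.2 * np.sin(2 * np.pi * 3 * t)

mid, side = ms_encode(left, right)
l2, r2 = ms_decode(mid, side)
print(np.allclose(left, l2), np.allclose(right, r2))  # True True
```

The losses come later, when the encoder spends far fewer bits on the side channel than on the mid channel.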

The central and difference channels are then compressed separately, taking advantage of the fact that the difference channel in real music is relatively poor: the two channels have a lot in common. The balance of compression between the central and difference channels is chosen on the fly, but most of the time a much larger stream is allocated to the central channel. Complex algorithms decide what is preferable at any given moment: a more correct spatial picture, better quality for the information common to both channels, or simply compression without mid/side stereo at all, that is, dual-channel mode.

Oddly enough, stereo is the weakest point of Layer 3 compression at 128 kbps. One cannot really criticize the creators of the format for this: it is still the lesser of the possible evils. Subtle stereo information is hardly perceived consciously (if you do not count obvious things: the coarse placement of instruments in space, artificial effects, and so on), so the quality of the stereo image is the last thing a person evaluates. Usually something else gets in the way first: computer speakers, for example, introduce much more substantial flaws, and one simply never gets as far as such subtleties as incorrect transmission of spatial information.

Do not think that what prevents computer acoustics from revealing this shortcoming is that the speakers stand a meter apart, on either side of the monitor, without creating a sufficient stereo base. The point is not even that. First, when it comes to such speakers, the listener sits right in front of them, which creates the same effect as speakers in the corners of a room, and then some: on normal acoustics at decent volume you can almost never pinpoint the exact spatial position of sounds (this is not about the sound picture, which, on the contrary, computer speakers will never build, but about the direct, conscious perception of the differences between the channels). Computer speakers (in their usual placement) or headphones give a much clearer direct perception of stereo than ordinary music acoustics.

It is safe to say that for the direct, informative perception of sound we do not really need accurate stereo information. Directly detecting the difference in this respect between the original and Layer 3 at 128 kbps is rather difficult, although possible. It takes a lot of experience, or an amplification of the effects in question. The simplest thing you can do is to virtually spread the channels farther apart than is physically possible. This is usually the effect behind the "3D Sound" button on cheap computer equipment, or in boom boxes whose speakers are not separated from the device body and are too weak to convey a nice stereo image naturally. Spatial information is converted into the ordinary audio information of both channels: the difference between the channels increases.

I applied a stronger effect than is usually used, in order to hear the difference better. Listen to how it should sound after encoding at 256 kbps with dual channels (256_channels_wide.mp3, 172 KB), and how it sounds after encoding at 128 kbps with Joint Stereo (128_channels_wide.mp3, 172 KB).
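A stereo-expander effect of this kind can be approximated by scaling the mid and side channels from the sketch above. This is only an illustration of the idea, not the processing actually used for the files; the gains match the 33% mid / 300% side mix mentioned below, the file names are hypothetical, and numpy plus soundfile are assumed.

```python
import numpy as np
import soundfile as sf  # assumed dependency

def widen(stereo, mid_gain=0.33, side_gain=3.0):
    """Exaggerate the stereo difference: attenuate the mid (common) part,
    boost the side (difference) part, then convert back to L/R."""
    left, right = stereo[:, 0], stereo[:, 1]
    mid = (left + right) / 2.0 * mid_gain
    side = (left - right) / 2.0 * side_gain
    out = np.stack([mid + side, mid - side], axis=1)
    return np.clip(out, -1.0, 1.0)  # keep the result within full scale

audio, sr = sf.read("decoded_128.wav")        # hypothetical decoded MP3
sf.write("widened_128.wav", widen(audio), sr)
```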

A digression. Both of these files are 256 kbps MP3s encoded with MP3 Producer 2.1. Do not be confused: first, I am testing MP3, and second, I am publishing the results of that MP3 test as MP3 ;). It went like this: first I encoded the musical passage at 128 and 256 kbps. Then I decoded these files, applied the processing (a stereo expander), compressed the results at 256 kbps purely to save space, and posted them here.

By the way, only at 256 kbps does MP3 Producer 2.1 switch off Joint Stereo and switch on Dual Channels, i.e. two independent channels. Even 192 kbps in Producer 2.1 is some kind of Joint Stereo, which is why my examples were compressed very incorrectly at anything below 256 kbps. This is the main reason why "full" quality starts at 256 kbps: historically, any smaller stream in the standard commercial products from Fraunhofer (up to 1998) is Joint Stereo, which in any case is unacceptable for fully correct transmission. Other (or later) products, in principle, let you choose arbitrarily between Joint Stereo and dual channels for any stream.

About the results

In the original (which in this case is itself a 256 kbps file) we hear sound with an amplified difference channel and a weakened central one. The reverb on the voice is heard very clearly, as are all kinds of artificial reverbs and echoes in general: these spatial effects live mainly in the difference channel. To be specific, in this case the mix was 33% of the central channel and 300% of the difference channel. The extreme case, 0% of the central channel, is what gets switched on in music-center-type equipment under names like "Karaoke Vocal Fader" or "Voice Cancellation / Remove", whose purpose is to remove the voice from the recording. The trick is that the voice is usually recorded only in the central channel, i.e. with identical presence in the left and right channels. By removing the central channel we remove the voice (and a lot more besides, which is why this feature is fairly useless in real life). If you have such a device, you can listen to your MP3s through it: you get an amusing Joint Stereo detector.
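The "0% central channel" trick amounts to listening to the difference channel alone. A minimal sketch, assuming a stereo file with the vocal panned dead center (the file name is a placeholder):

```python
import soundfile as sf  # assumed dependency

audio, sr = sf.read("song.wav")              # hypothetical stereo track
side = (audio[:, 0] - audio[:, 1]) / 2.0     # 0% mid: only the difference channel survives
# Anything identical in both channels (typically the lead vocal) cancels out;
# what remains is a crude "Joint Stereo detector" as described above.
sf.write("center_removed.wav", side, sr)
```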

From this example you can already indirectly understand what we have lost. First, all the spatial effects became noticeably worse: they were simply lost. Second, and worse, is the result of spatial information turning into sound. What used to correspond to positions in space becomes sound components wandering around almost at random, a kind of "spatial noise" that was not in the original recording (the original withstands at least the conversion of spatial information into sound without foreign effects appearing). It is known that distortions of this type often appear directly when encoding at low bitrates, without any additional processing. The point is that immediate distortion of the sound itself (of which there is almost none) is perceived consciously and at once, while distortion of the stereo image (of which, with Joint Stereo, there is always plenty) is perceived only subconsciously and only after listening for some time.

This is the main reason why the sound of Layer 3 at 128 kbps cannot be considered full CD quality. The point is that converting stereo sound to mono in itself produces strong negative effects: often the same sound is repeated in the two channels with a small delay, which, when mixed down, simply gives a sound smeared in time. Mono made from stereo sounds much worse than an original mono recording. The difference channel, added to the central (mixed-down mono) channel, restores the complete separation into left and right, but a partial absence of the difference channel (insufficient encoding) brings not only an inadequate spatial picture but also these unpleasant effects of mixing stereo sound into one mono channel.

Even when all other obstacles are removed (the equipment is good, the tonal color and the dynamics are unchanged, i.e. the stream is sufficient for encoding the central channel), this problem will still remain. But there are recordings made in such a way that the negative effects of compression based on mid/side stereo do not show up, and then 128 kbps gives the same full quality as 256 kbps. A particular case: the recording may be rich in stereo information but poor in sound information, for example a slow piano piece. In that case the stream is entirely sufficient to encode a difference channel that conveys accurate spatial information. There are harder cases: a busy arrangement, filled with various instruments, that nevertheless sounds very good at 128 kbps. This is rare, perhaps one case out of five or ten, but it does happen.

Now the sound itself. It is hard to find fault with the direct sound of the central channel in Layer 3 at 128 kbps. The absence of frequencies above 16 kHz (which, by the way, are very rarely but still transmitted) and a certain drop in amplitude at the very top are, strictly speaking, trifles in themselves. A person completely gets used to even larger tonal distortions within a few minutes; they simply cannot be considered strong negative factors. Yes, they are distortions, but for the perception of "full quality" they are far from decisive. On the side of the central, directly audible channel, problems of a different kind are possible: a sharp limitation of the stream available for encoding this channel, caused simply by objective circumstances, such as very abundant spatial information, a recording loaded with a multitude of sounds, frequent inefficient short blocks, and, as a result of all this, a fully consumed reserve of the bit buffer. This happens, but relatively rarely, and when it does, it is usually noticeable continuously over large fragments.

It is very difficult to show defects of this kind explicitly so that anyone would notice them. A person used to working with sound will notice them easily even without any processing, but to an ordinary, non-critical listener they may seem completely indistinguishable from the original sound, some abstract nitpicking about nothing... And yet, look at the example. To bring these defects out, strong processing had to be applied: significantly reducing the content of mid and high frequencies after decoding. I am removing what prevents us from hearing these frequency nuances. We are, of course, violating the assumptions of the encoding model, but it helps to understand better what we are losing. So, this is how it should sound (256_bass.mp3, 172 KB), and this is what happens after decoding and processing a 128 kbps stream (128_bass.mp3, 172 KB). Pay attention to the noticeable loss of continuity and smoothness in the bass, as well as some other anomalies. The transmission of low frequencies in this case has been sacrificed in favor of higher frequencies and spatial information.
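The processing described here (cutting the mids and highs so the bass can be heard in isolation) can be approximated with a simple low-pass filter. A sketch using scipy and soundfile, both assumed dependencies; the cutoff frequency and file names are arbitrary placeholders, not the settings actually used for the examples.

```python
import soundfile as sf                      # assumed dependency
from scipy.signal import butter, sosfiltfilt

def isolate_bass(path_in, path_out, cutoff_hz=150):
    """Strongly attenuate mid and high frequencies so that bass differences
    between two decoded files are easier to hear."""
    audio, sr = sf.read(path_in)
    sos = butter(8, cutoff_hz, btype="lowpass", fs=sr, output="sos")
    sf.write(path_out, sosfiltfilt(sos, audio, axis=0), sr)

isolate_bass("decoded_256.wav", "bass_only_256.wav")   # hypothetical file names
isolate_bass("decoded_128.wav", "bass_only_128.wav")
```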

It should be noted that the work of the psychoacoustic compression model can be observed (with attentive listening and some experience with sound) even at 256 kbps, if you apply a more or less strong equalizer. If you do this and then listen, you can sometimes (quite often, in fact) notice unpleasant effects (ringing, gurgling). More importantly, the sound after such a procedure acquires an unpleasant, uneven character, which is very difficult to notice immediately but becomes noticeable with prolonged listening. The difference between 128 and 256 is only that in a 128 kbps stream these effects often exist without any processing at all. They are also difficult to notice immediately, but they are there; the bass example gives some idea of where to look for them. In high streams (above 256 kbps) it is simply impossible to hear them without processing. This problem does not concern high streams, but there is something that sometimes (very rarely) prevents even Layer 3 at 256 kbps from matching the original: the temporal parameters (more details will follow in a separate article: see MPEG Layer 3 - 256 /link to another article/).

There are recordings that this problem does not affect. The easiest way is to list the factors that, on the contrary, lead to the distortions described above. If none of them is present, there is a good chance of fully successful encoding, in this respect, in Layer 3 at 128 kbps. Everything depends, however, on the specific material...

First of all, noise, let us say hardware noise. If the recording contains noticeable noise, it is highly undesirable to encode it at low bitrates, since too much of the stream goes to encoding unnecessary information, which, moreover, does not lend itself well to sensible encoding with the help of the acoustic model.

  • Simply extraneous sounds of all kinds: the monotonous noise of a city, a street, a restaurant, and so on, against which the main action takes place. Sounds of this type produce a very abundant flow of information that has to be encoded, and the algorithm is forced to sacrifice something in the main material.
  • Unnaturally strong stereo effects. This rather belongs to the previous item, but in any case too much of the stream goes to the difference channel, and the encoding of the central channel deteriorates badly.
  • Strong phase distortion that differs between channels. In principle, this relates more to the flaws of the encoding algorithms common today than to the standard itself, but still: the wildest distortions arise because the whole process breaks down completely. In most cases, such distortions of the source recording come from recording on cassette equipment and subsequent digitization, especially when played back on inexpensive tape recorders with a poor-quality transport. The heads are crooked, the tape winds askew, and the channels end up slightly delayed relative to each other.
  • Simply an overloaded recording. Roughly speaking, a large symphony orchestra playing all at once :). Usually, after compression at 128 kbps, something comes out completely mangled: strings, brass, drums, the soloist. This occurs, of course, not only in classical music.

At the other pole is what usually compresses well:

  • A solo instrument with a relatively simple sound: guitar, piano. The violin, for example, has too dense a spectrum and usually does not sound very good; with the violin, a lot actually depends on the piece itself. Small line-ups also compress quite well: bards and similar singer-songwriter recordings, for example (instrument + voice).
  • Well-made modern productions. This does not refer to musical quality but to sound quality: the mixdown, the placement of the instruments, the categorical absence of complex global effects that decorate the sound and are generally superfluous. This category includes, for example, practically all modern pop, some rock, and quite a lot else.
  • Aggressive, "Electricity" music. Well, in order to somehow bring an example - early Metallica (and modern in general, too). [Remember, not about musical styles! just an example.]

It is worth noting that Layer 3 compression is almost unaffected by parameters such as the presence or absence of high frequencies, the amount of bass, a dull or ringing character, and so on. There is a dependence, but it is so weak that it can be ignored.

Unfortunately (or fortunately?), it all comes down to the listener. Many people hear the difference between streams of about 128 kbps and the original without any preparation or pre-selection; many others do not perceive even synthetic extreme examples as different. The first do not need to be convinced of anything; the second cannot be convinced even by examples... One could simply say that for some there is a difference and for others there is not, if not for one thing: as we keep listening to music, our perception keeps improving. What seemed good quality yesterday may no longer seem so tomorrow; that is how it always goes. And while compressing at 320 kbps instead of 256 kbps is rather pointless (at least in my opinion), since the gain is not too significant, though it does exist, it is still worth keeping music at no less than 256 kbps.

In this article we will clarify the nuances of audio encoding that affect its sound quality. Understanding the conversion settings will help you choose the most appropriate encoding option in terms of the trade-off between file size and sound quality.

What is a bitrate?

Bitrate is the amount of data per unit of time used to transmit an audio stream. For example, a bitrate of 128 kbps stands for 128 kilobits per second and means that 128 thousand bits are used to encode one second of sound (1 byte = 8 bits). If you convert this value to kilobytes, it turns out that one second of sound takes about 16 KB.
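The arithmetic is easy to check; a quick sketch:

```python
bitrate_kbps = 128
bytes_per_second = bitrate_kbps * 1000 / 8        # 16,000 bytes, i.e. ~16 KB per second of sound
track_seconds = 4 * 60                             # a typical 4-minute track, for illustration
size_mb = bytes_per_second * track_seconds / 1_000_000
print(bytes_per_second, size_mb)                   # 16000.0 bytes/s, ~3.8 MB per track at 128 kbps
```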

Thus, the higher the bitrate of a track, the more space it takes up on your computer. At the same time, within a single format, a higher bitrate allows sound to be recorded with higher quality. For example, if you convert an audio CD to MP3, the sound at a bitrate of 256 kbps will be noticeably better than at 64 kbps.

Since disk space has become quite cheap these days, we recommend converting to MP3 at a bitrate of no less than 192 kbps.

Bitrates are also divided into constant and variable.

The difference between constant bitrate (CBR) and variable bitrate (VBR)

With a constant bitrate, the same number of bits is used to encode every section of the sound. But the structure of sound usually varies: encoding silence, for example, requires far fewer bits than encoding a dense, saturated passage. A variable bitrate, unlike a constant one, automatically adjusts the encoding quality depending on the complexity of the sound over particular intervals. That is, a lower bitrate is used for simple sections, and a higher one for complex ones. Using a variable bitrate allows you to achieve higher sound quality with a smaller file size.
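As an illustration (not taken from the article), this is how the two modes are typically selected when encoding MP3 with ffmpeg and the LAME encoder, wrapped in Python. It assumes an ffmpeg build with libmp3lame is installed; the input file name is a placeholder.

```python
import subprocess

src = "input.wav"  # hypothetical source file

# CBR: every second of audio gets the same 128 kbit, regardless of content
subprocess.run(["ffmpeg", "-i", src, "-codec:a", "libmp3lame",
                "-b:a", "128k", "cbr_128.mp3"], check=True)

# VBR: LAME quality level 2 (roughly 190 kbps on average for typical material);
# complex passages get more bits, quiet or simple passages get fewer
subprocess.run(["ffmpeg", "-i", src, "-codec:a", "libmp3lame",
                "-q:a", "2", "vbr_q2.mp3"], check=True)
```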

What is the sampling rate?

This concept arises when converting an analog signal to digital and indicates the number of samples (measurements of the signal level) per second that are taken to convert the signal.
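A tiny sketch of what "samples per second" means: generating one second of a 440 Hz tone at two different sampling rates (numpy assumed).

```python
import numpy as np

def sample_tone(freq_hz=440, sample_rate=44_100, seconds=1.0):
    """Measure the signal level `sample_rate` times per second."""
    t = np.arange(int(sample_rate * seconds)) / sample_rate
    return np.sin(2 * np.pi * freq_hz * t)

print(len(sample_tone(sample_rate=44_100)))  # 44100 measurements for one second of sound
print(len(sample_tone(sample_rate=8_000)))   # 8000: fewer samples, a coarser representation
```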

What is the number of channels?

A channel, as applied to audio encoding, is an independent audio stream. Mono is one stream, stereo is two streams. The notation N.M is often used to indicate the number of channels, where N is the number of full-range channels and M is the number of low-frequency channels (for example, 5.1).
