The whole story started when my colleague Sergei Rastegaev was "lucky" enough to spoil three discs while recording a video file downloaded from the net. The movie was burned to three Verbatim DataLife Plus CD-Rs, and verification in Nero Burning ROM failed on all three discs at the same point. An attempt to copy the file back to the hard drive caused a read error at the 78% mark. The discs themselves were fine, we had no problems with other files, and the same thing happened when we recorded the file on other PCs with other drives. That was really strange.

Starting the research

First of all I opened the video file to see what happens at the 78% mark. The file was corrupted at that point. I repaired it with VirtualDub, then recorded the new version and the corrupted one side by side on CD. As a result, the new version read well, while the corrupted one produced a read error. It was clear that the data in the file influenced either its reading or its recording. I didn't know whether an answer to this problem existed, so I searched for information on the net. When I failed to find anything interesting, I brought the problem to the guys on the forum. It turned out that some people had faced it before, but no one had taken it seriously. I sent out badvideo.avi with those magical data to everyone who wanted to verify that there was a sequence of bytes that couldn't be read correctly after recording. The guys tested that strange file for reading and writing using various software and hardware. That was an important stage, because it wasn't yet clear whether the problem was in software or hardware.

First versions

The first tests had very interesting results.

1. Some drives read the corrupted file without any errors (see the table). Since not all drives show the read error, the problem probably isn't in the recording itself, though that still needed verification.

2. The file's readability depends on the mode it was recorded in: if it's recorded in Mode 1, reading fails; if it's Mode 2/XA or UDF, the file reads fine.

From the very beginning of our discussion one of the guys – abgm – suggested that the problem arose at the EFM encoding stage.


EFM (Eight-to-Fourteen Modulation) is a redundant 8-to-14 code: each source byte is encoded as a 14-bit word, and the redundancy improves noise immunity. Three merging bits are inserted between 14-bit words to maintain the restrictions on runs of adjacent zeros and ones; this simplifies synchronization and reduces the DC component of the signal, since the drive synchronizes on pit-land transitions. If the gap between two such transitions grows too large, the drive can lose synchronization, because the rotation of a disc is never ideal.

abgm cites Andy McFadden's CD-Recordable FAQ: "If there is more than one possibility to place merging bits which satisfy the requirements for the length of sections and sync groups, the group that minimizes the low-frequency signal components is preferred. It is obtained by minimizing the digital sum value (DSV), which can be calculated by adding 1 for every time T after the transition from pit to land, or by subtracting 1 for every T after the transition to pit. Adding 1 to the merging bits inverts the signal, causing a transition from pit to land or vice versa. DSV minimization is very important, as low-frequency signals may not let the laser precisely focus and position on the track." (Presumably, such signals hamper the search for addresses on the spiral track of a recorded CD.)

We supposed that there are data sequences that make the drive lose synchronization with the recorded signal. Such sequences do exist; they are called weak sectors, and they work by corrupting the DSV. To check the first version we had to study the technology of signal encoding on CD, but we didn't have enough documentation to prove or disprove it.
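To make the DSV idea concrete, here is a minimal sketch (the sign convention and starting level are arbitrary assumptions) of computing the digital sum value over an NRZI channel-bit stream, where every 1 toggles the signal between land and pit:

```python
def dsv(channel_bits):
    """Digital Sum Value of an NRZI channel-bit stream.

    Each 1 in the stream toggles the signal between land (+1) and pit (-1);
    the DSV is the running sum of the signal level over all bit periods.
    """
    level, total = 1, 0
    for bit in channel_bits:
        if bit:
            level = -level  # pit/land transition on every channel 1
        total += level
    return total

# A long run without transitions drives the DSV (and the DC component) up:
print(dsv([0] * 10))      # 10
# A balanced stream keeps it near zero:
print(dsv([1, 0, 1, 0]))  # 0
```

A weak sector exploits exactly this: data chosen so that even the best choice of merging bits leaves the DSV drifting far from zero.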

When we found out that the file recorded on CD in Mode 2/XA read well, we got one more version. The EDC/ECC data (error detection code and error correction code) are calculated differently in Mode 1 and Mode 2/XA:

Mode 1 (sector layout)

Mode 2/XA (sector layout)

Given that, we supposed that there could be an error in the EDC/ECC calculation algorithm which shows up for certain data sequences recorded in Mode 1.

It was relatively simple to check the second version, since there is special software that reads the raw sector data and calculates EDC/ECC in software. Using CDDoctor and KProbe, we scanned the CD with the file on a LiteOn drive that can't read it, and the scan didn't reveal any errors. Since these programs handle the CD at a low level, some of the discussers were inclined to think the error was at the EDC/ECC level. We then tested RAW reading, ignoring EDC/ECC errors, with software such as CloneCD on the non-reading drives, but the sector read fault errors appeared anyway. This proved the error wasn't connected with EDC/ECC calculation. The sector read fault errors differed between drives.
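For reference, the Mode 1 EDC that such tools recompute in software is a 32-bit CRC over the sector's sync, header and user data. A minimal bitwise sketch (0xD8018001 is the reversed form of the EDC polynomial as used in open-source tools; table-driven versions are equivalent):

```python
def edc(data: bytes) -> int:
    """CD-ROM EDC: 32-bit CRC, LSB-first, no initial value or final XOR.

    Polynomial: (x^16 + x^15 + x^2 + 1)(x^16 + x^2 + x + 1),
    which is 0xD8018001 in reversed (LSB-first) form.
    """
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0xD8018001 if crc & 1 else 0)
    return crc

# With no init/xorout, an all-zero block has an EDC of zero,
# and any single-bit change yields a non-zero EDC:
print(hex(edc(b"\x00" * 16)))            # 0x0
print(edc(b"\x01" + b"\x00" * 15) != 0)  # True
```

A drive does the same computation in hardware; software like CDDoctor simply repeats it on the raw bytes and compares the results.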

Also, we had to check the third version: errors in software, in particular in recording software or device drivers. So we tested reading and writing with different driver versions, different recording software, and in non-Windows OSs. The participants found that the results didn't depend on the recording software, provided the data were recorded in Mode 1. Nor did they depend on the write technique: Track-At-Once, Disc-At-Once or Disc-At-Once/96. The error appeared in Linux as well. That is why we could relieve the Nero developers of responsibility.

A bit later Aleksei Polukarov (aka ANPolter) suggested a fourth version: some drives could interpret scratched areas as sequences of a certain EFM code in which zeros outnumber ones (the table contains such codes) and try to decode them into user data. Recording those data back onto CD then reproduces the incorrectly interpreted EFM codes: a kind of weak sector. Aleksei supposed that the CD with the source video file had been read on such a drive.

The main argument against the version about a long run of zeros on CD was that EFM encoding must not produce a run of more than 10 zeros, thanks to the three "rarefying" merging bits. But weak sectors exist, and it's impossible to disprove this version completely. No one objected to Aleksei's suggestion that the "magical" data sequence had been read from the corrupted disc. Later we will see whether that can really happen.


So, we had to check all the versions. To do that we had to carry out mathematical conversions of the data and examine the results, but first we had to find the data packet that influenced CD reading.

First off, the 1MB area was cut into 2KB parts (single sectors), and the parts were recorded in series. I noticed that the sectors failed to read in pairs. This means that either the byte sequence lies on the border of two sectors, or one unreadable sector makes its neighbor unreadable as well.

By increasing the fragment size to 4 KB, I isolated the fragments that were unreadable on their own. The magical sequences had to be hidden exactly in these files. For that purpose we developed a special program: it takes a 4KB fragment as input and produces a set of files into which the content of the input file is projected byte by byte. After recording all these files onto CD-RW, I could pinpoint where the byte sequence that causes unreadability starts and ends. Bingo! I found the first signature, starting at the 8th byte from the beginning of the file (numbering starts at 0): A8 FD 01 7E 7F 9F 9F D7 D7 E1 61 88. Then I decided to find out whether the location of this signature influences file readability by creating 4096 files, shifting the signature by 1 byte each time. The test showed that the signature takes effect only at a definite position within the sector.

Placed at that position in any sector of any file, this sequence caused the file to fail reading on the non-reading drives (see the table). The beginning of a 2048-byte sector containing this signature, which causes the read fault error, looks like this:

XX XX XX XX XX XX XX XX A8 FD 01 7E 7F 9F 9F D7 

A bit later, as I continued to sift through the source file, I found some more signatures with the same characteristics:

  • All signatures found were 12 bytes long. If you look at the sector format (fig. 1), you will see that the sector sync header (Sync) is also 12 bytes long.
  • The signature makes an effect only at a definite sector point.
  • All signatures found start at an address divisible by 8.

Further experiments with the recorded data showed that the signature doesn't affect readability of the sector it's located in, but it prevents reading of the two following sectors on the Teac CD-W524 drive (that is why the sectors failed to read in pairs). One of the other discussers found that on the Sony CRX-225E drive the signature prevents reading of the very sector it's recorded in, while the other sectors read fine. So the signature affects non-reading drives from different manufacturers differently. This also supports the idea that the drive loses synchronization because of some combination of factors.

With weak_detector, which searches for weak sectors in ISO images and on CDs, we tested the ISO image containing badvideo.avi created with WinISO. Weak_detector didn't find anything. After that ISO image was recorded onto CD-RW, we again got an unreadable file. This showed that the signatures we found had nothing in common with the sequences used to create weak sectors. Besides, the sequences used to create weak sectors are longer than 12 bytes.

The tests in other operating systems and with other recording software proved once again that readability of a file with the signatures depends neither on the recording device, nor on the media type, nor on the recording utility, but only on the drive that reads the data.


All the tests brought very interesting results, but it seemed we weren't getting any closer to the solution. Taking into account that the signature works only at a definite position in the sector, Aleksei Polukarov said that it couldn't be otherwise, because the data have to pass through the scrambler, and after the scrambler all the signatures look alike.

Aleksei carried out one interesting experiment. If you compare the CD sector format in Mode 1 and Mode 2/XA Form 1, you will see that the beginning of the user data in a Mode 2/XA Form 1 sector is shifted by 8 bytes relative to Mode 1. He shifted the signature inside the file by 8 bytes, so that the beginning of the 2048-byte sector looked like this:

A8 FD 01 7E 7F 9F 9F D7 D7 E1 61 88 XX XX XX XX 

He recorded the file in Mode 2/XA, and the problem arose in Mode 2/XA as well!

Let’s see what a scrambler is.

(scrambler diagrams)

Scrambling is a modulo-2 addition (exclusive OR) of the sector data with a table of pseudorandom numbers. Scrambling is needed to convert the regular sequences of numbers often found in data into a sequence of random-looking numbers, improving the probabilistic characteristics of the output signal. However, it's also possible that random user data turn, after the scrambler, into a homogeneous pattern or even into one specific sequence of bytes.

We had to find out what caused the problem, and that is why we had to repeat the data conversion stages on the software level and study the results. Aleksei’s idea about the scrambler was interesting and probably hid the clue.

I wrote a function that converts user data with the scrambler, using the same algorithm applied before CD recording, and applied it to all the files with the signatures. After the scrambler, all the signatures are identical. The sequence 00 FF FF FF FF FF FF FF FF FF FF 00 is exactly the sector sync header used for synchronization and detection of the sector start in old CD-ROM models. As you can see, even the latest models of CD-ROM(R/RW) and DVD-ROM(R/RW) drives often still use this outdated sync header to detect the sector beginning!
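The conversion is easy to reproduce. Below is a sketch of the scrambler as specified in ECMA-130: a 15-bit LFSR with polynomial x^15 + x + 1, seeded with 0x0001, whose output bytes (taken LSB-first) are XORed with sector bytes 12 through 2351. A signature at byte 8 of the user data therefore sits 4 (header) + 8 = 12 bytes into the scrambled region:

```python
def scrambler_stream(n: int) -> bytes:
    """First n bytes of the ECMA-130 scrambling sequence: a 15-bit LFSR
    with polynomial x^15 + x + 1, seeded with 0x0001, output LSB-first."""
    reg, out = 0x0001, bytearray()
    for _ in range(n):
        byte = 0
        for i in range(8):
            byte |= (reg & 1) << i           # output bit, LSB first
            fb = (reg ^ (reg >> 1)) & 1      # feedback from taps x^15 and x
            reg = (reg >> 1) | (fb << 14)
        out.append(byte)
    return bytes(out)

SIGNATURE = bytes.fromhex("A8FD017E7F9F9FD7D7E16188")

# Scrambling covers sector bytes 12..2351; a signature at byte 8 of the
# user data lies at offset 4 (header) + 8 = 12 into the scrambled region.
stream = scrambler_stream(24)
scrambled = bytes(a ^ b for a, b in zip(SIGNATURE, stream[12:24]))
print(scrambled.hex(" "))  # 00 ff ff ff ff ff ff ff ff ff ff 00
```

XORing the signature with the scrambler sequence at its in-sector offset yields precisely the 00 FF ... FF 00 sync header, which is the whole mechanism of the bug.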

So, the error was caused by the fact that, after the conversion performed before recording, user data turned into the very sequence that marks the beginning of a sector. Many drives can't tell a false sector-start marker from a real one. The answer was that simple. Aleksei said: "I didn't expect the drives to be that dull. I had this idea at the beginning, but then I read that the sector header was used only in the first CD-ROM drives, whose electronics were based on chips from audio devices with a fixed decoding stream; that is why I didn't even check this version."

If we suppose that the drive expects the frame start at any position, 2048 - 11 = 2037 signatures can exist within the user data alone. We must also account for the fact that a signature can occur in the EDC/ECC data. Software developers could easily write a function to check user data before CD burning. But you shouldn't be too afraid of data with signatures.

If you estimate the probability of a specific 12-byte value landing in a specific part of a sector, it is negligibly small: 1/(2^96) ≈ 1.3×10^-29. Taking into account the ~2300 possible signature positions in a sector, the probability of a false sync header is still so small (about 3×10^-26) that this specific error could never occur by chance in the entire history of CD media. The presence of several 12-byte signatures in one file supports Aleksei's version that the sequences came into the file from a corrupted disc, because such a coincidence is practically impossible! The probability of damaging a disc is far higher than that of encountering such data by chance. The drive developers knew that, and probably accepted this synchronization fault.
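The arithmetic is easy to verify; the ~2300 figure is the article's rough count of possible signature positions, which I approximate here with the number of 12-byte offsets in the 2340-byte scrambled region of a raw 2352-byte sector:

```python
# Probability that 12 specific bytes land at one specific offset:
p_one = 1 / 2**96
print(f"{p_one:.2e}")              # 1.26e-29

# One dangerous signature per possible offset in the scrambled region
# of a raw sector (bytes 12..2351 -> 2340 bytes, 2329 offsets):
positions = 2340 - 12 + 1
print(f"{positions * p_one:.2e}")  # 2.94e-26
```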

Secondly, you can always find a drive that reads the important data, and rewrite them in another format (for example, from Mode 1 to Mode 2/XA) or onto another media type (e.g. DVD).

In addition, we checked whether the drives that successfully read the 1MB fragment of the corrupted video file use sync headers at all. We generated a 4MB file, badfile.bin, consisting of signatures placed one after another. The file was recorded and tested on several drives. It turned out that some drives (for example, NEC) couldn't read badfile.bin as a separate file but could read it when reading the whole CD image. Evidently, the sync header is used as an auxiliary marker when searching for the sector start, but not during further sequential reading. Other drives couldn't read the disc with badfile.bin at all, which means they also rely on the sync header, though to a lesser degree than the drives that fail on files containing even single signatures. But there were also drives that read badfile.bin without errors.


Well, we can see that the hardware manufacturers economize at the users' expense. Signs of this disease were found in drives from companies such as Teac, Sony (LiteOn) and Plextor; look at the table. Over 2/3 of the drives tested fail to read a file if it contains a signature that turns into a data sequence identical to the sync header! Except for Toshiba and HP, all manufacturers use the sync header as a key marker of the sector start during reading. Such an attitude suggests that manufacturers may be cutting other corners too. I hope this review will change the situation for the better and make the optical drive makers update their firmware as soon as possible.