Simulation of image transfer over noisy channel
Problem
Following this question, I've tried to rewrite some core methods to avoid using String for bit operations, and I ended up using BitStream. The code is almost 8x faster than before, but it still looks quite slow: it processes 15 images per minute (0.25 FPS). I think that there's still room for improvement, but one of the requirements forces me to work at bit level, and probably Java is not the best choice when having to deal with bits.

I'm talking about the isError and addError methods, which need to be applied to each bit (basically, the noisy channel is described by a bit-level error probability, meaning that each bit must be processed individually).

The BitStream class is from the org.icepdf.core.io package. Feel free to suggest a better alternative if you wish!

TestLoop
```
for (int i = 0; i < 15; i++) {
    byte[] input = Files.readAllBytes(new File("D:\\testFrame.jpg").toPath());
    ByteArrayInputStream bis = new ByteArrayInputStream(input);
    ByteArrayOutputStream bos = new ByteArrayOutputStream();

    // Repetition coder
    RepetitionCoder repCoder = RepetitionFactory.createRepetitionCoder(5);
    repCoder.encode(new BitStream(bis), new BitStream(bos));
    bis = new ByteArrayInputStream(bos.toByteArray());
    bos.reset();

    // Noisy channel
    NoisyChannel channel = new NoisyChannel(ErrorFactory.createError(10, -3, 0));
    channel.transfer(new BitStream(bis), new BitStream(bos));
    bis = new ByteArrayInputStream(bos.toByteArray());
    bos.reset();

    // Repetition decoder
    RepetitionDecoder repDecoder = RepetitionFactory.createRepetitionDecoder(5);
    repDecoder.decode(new BitStream(bis), new BitStream(bos));

    // Write
    Files.write(outputFile.toPath(), bos.toByteArray());
}
```

RepetitionCoder.encode
```
public void encode(BitStream in, BitStream out) {
    try {
        while (in.available() > 0) {
            // Read one bit
            int currentBit = in.getBits(1);
            // Output that bit `repetitions` times
            for (int i = 0; i < repetitions; i++) {
                out.putBit(currentBit);
            }
        }
        out.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
```
Solution
Let's look at the big picture (no pun intended). If your goal is to transmit an image file over a noisy channel, then the thing to optimize is not the implementation of the code. As long as your code is not egregiously bad, the thing to worry about is the efficiency of the protocol, because the data transmission should be much slower than the disk I/O, which in turn should be much slower than what your CPU can handle.
Your idea for improving reliability is to transmit each bit r times consecutively. This scheme has several drawbacks:
- It's inefficient. Obviously, r ≥ 2, which means that at the least, you double the transmission time. But doubling the data can only let the receiver detect that an error occurred. If you want to let the receiver automatically correct any detected errors without requesting retransmission, you need at least r ≥ 3, with r being odd, so that the receiver can go with a majority vote for each bit. Tripling or quintupling the transmission time is a very steep price to pay. (In your decoder, if r is even, then ties are biased towards 1, so you don't want to use even r.)
- Transmitting each bit consecutively r times is akin to lowering the bit rate by a factor of r, which is the same simple trick that would normally be done by hardware. (Well, not exactly, since the clock speeds remain the same, and frame signalling mechanisms don't get slowed down.) For example, Ethernet and Wi-Fi can autonegotiate the speed down, or you can configure the speed manually.
- If you transmit the image normally r times instead, then the same number of bits get transmitted, but the receiver might optimistically try to render the first full copy that it receives, then confirm and make corrections when it receives the subsequent copies. That could result in lower latency and a better user experience.
- A single glitch in the analog medium is likely to wipe out multiple consecutive bits. If the bits were interleaved temporally, then you wouldn't be putting your eggs in the same basket.
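To make the majority vote concrete, here is a standalone sketch of a repetition decoder for odd r; it works on plain int arrays rather than the asker's BitStream API, and the class and method names are illustrative, not from the original code:

```java
public class MajorityVote {
    // Collapse each run of r repeated bits into one bit by strict majority.
    // Assumes r is odd, so a tie is impossible and no bias toward 0 or 1
    // can creep in (unlike an even-r decoder).
    static int[] decode(int[] bits, int r) {
        int[] out = new int[bits.length / r];
        for (int i = 0; i < out.length; i++) {
            int ones = 0;
            for (int j = 0; j < r; j++) {
                ones += bits[i * r + j];
            }
            out[i] = (2 * ones > r) ? 1 : 0;
        }
        return out;
    }
}
```

With r = 3, a single flipped bit in any group is corrected, but two flips in the same group defeat the vote; that is why burst errors hitting consecutive repetitions (the last bullet) are so damaging.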
So, what to do?
First of all, you should decide whether you are interested in error detection or error correction. Error detection means adding enough redundancy to let the receiver know that some error occurred; the receiver could either report the failure or request retransmission. Error correction means adding enough redundancy to let the receiver automatically correct errors, as long as there aren't too many errors.
A simple error detection mechanism is to add a parity bit. For example, you could transmit the data in groups of 7 bits, then insert an eighth bit such that the sum of all eight bits is even (this is called "even parity"). That lets you detect any single bit-flip (in fact, any odd number of flips) in each 8-bit group, at a cost of 14% overhead. You can tune the parameters as appropriate.
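A minimal sketch of that scheme, using int arrays of bits for clarity (the class and method names are mine, not from any library):

```java
import java.util.Arrays;

public class EvenParity {
    // Sender side: append an 8th bit so the group's bit sum becomes even.
    static int[] addParity(int[] sevenBits) {
        int[] out = Arrays.copyOf(sevenBits, 8);
        out[7] = Arrays.stream(sevenBits).sum() % 2; // 1 iff the 7 bits had an odd sum
        return out;
    }

    // Receiver side: accept the group iff its bit sum is still even.
    static boolean check(int[] eightBits) {
        return Arrays.stream(eightBits).sum() % 2 == 0;
    }
}
```

Flipping any single bit of a transmitted group makes check return false; flipping two bits in the same group cancels out, which is the scheme's limitation.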
Another error detection mechanism is to transmit a checksum for the file, or maybe a checksum for every kibibyte of data. CRC is a common class of checksum algorithms, but you could also use something like SHA-2.
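With Java's built-in java.util.zip.CRC32, a per-block checksum takes only a few lines; in this sketch the block size is a parameter, and the kibibyte granularity mentioned above would be blockSize = 1024:

```java
import java.util.zip.CRC32;

public class BlockChecksum {
    // One CRC-32 per blockSize bytes of payload. The receiver recomputes
    // these and flags any block whose checksum does not match, so a
    // retransmission request can name just the damaged blocks.
    static long[] crcs(byte[] data, int blockSize) {
        int blocks = (data.length + blockSize - 1) / blockSize;
        long[] out = new long[blocks];
        for (int i = 0; i < blocks; i++) {
            CRC32 crc = new CRC32();
            int from = i * blockSize;
            crc.update(data, from, Math.min(blockSize, data.length - from));
            out[i] = crc.getValue();
        }
        return out;
    }
}
```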
If you want an error-correcting code, then pick a scheme from the list. Reed-Solomon error correction is a common choice: you can tune its t parameter to tolerate whatever proportion of bit errors you expect to encounter, and still completely reconstruct the data.
Keep in mind that low-level protocols, such as modems, Ethernet, IP, and TCP, generally have some crude checksum mechanism built-in already, so whatever you implement would be an additional layer of insurance.
Context
StackExchange Code Review Q#114216, answer score: 4