'Twisted light' carries 2.5 terabits of data per second

louballoo

A while back I posted some examples of modern data transmission rates so that I could illustrate exactly how moronic "audiophool" USB and optical cables are.

If you still think that there is such a thing as "Audiophile research departments", check out what the real scientists are doing with data transmission:

http://www.bbc.co.uk/news/science-environment-18551284


Researchers have clocked light beams made of "twisted" waves carrying 2.5 terabits of data - the capacity of more than 66 DVDs - per second.
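As a sanity check on that headline figure (assuming a single-layer DVD capacity of about 4.7 GB, which is not stated in the article), the "66 DVDs per second" claim works out:

```python
# Rough check of the article's "more than 66 DVDs per second" figure.
# Assumption: single-layer DVD capacity ~= 4.7 GB (4.7e9 bytes).
rate_bits_per_s = 2.5e12                    # 2.5 terabits per second
rate_bytes_per_s = rate_bits_per_s / 8      # 312.5 GB per second
dvd_bytes = 4.7e9

dvds_per_second = rate_bytes_per_s / dvd_bytes
print(round(dvds_per_second, 1))            # roughly 66 DVDs per second
```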

The technique relies on manipulating what is known as the orbital angular momentum of the waves.

Recent work suggests that the trick could vastly boost the data-carrying capacity in wi-fi and optical fibres.

The striking demonstration of the approach, reported in Nature Photonics, is likely to lead to even higher rates.

Angular momentum is a slippery concept when applied to light, but an analogy closer to home is the Earth itself.

Our planet has "spin angular momentum" because it spins on its axis, and "orbital angular momentum" because it is also revolving around the Sun.

Light can have both these types, but the spin version is the far more familiar - as what is commonly called polarisation, or the direction along which light waves wiggle. Polarising sunglasses and many 3D glasses work by passing one polarisation and not another.

In many data-carrying applications involving light, more data is packed on to light waves by encoding one polarisation with one data stream, and another with a different stream.

That means twice as much information can fit within the same "bandwidth" - the range of colours that the transmitting equipment is able to process.
Twisted mission

Orbital angular momentum, or OAM, on the other hand, has only recently come to the fore as a promising means to accomplish the same trick.

The idea is not to create light waves wiggling in different directions but rather with different amounts of twist, like screws with different numbers of threads.

Most recently, Bo Thide of the Swedish Institute of Space Physics and a team of colleagues in Italy demonstrated the principle by sending beams made up of two different OAM states across a canal in Venice, an experiment they described in the New Journal of Physics.
[Image: "Twisted light" visualisation. Caption: Eight beams, each with its own "twist", were prepared for the data-rate test]

Most data traffic in optical fibres around the world is made up of different data streams on slightly different colours of light, which are combined at the sending end and split back into their constituent colours at the receiving end, in a technique called multiplexing.
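That separation idea can be sketched numerically: mix two carriers at different frequencies (stand-ins for different colours of light), then pick each back out of the spectrum. This illustrates frequency-division multiplexing in general, not any real DWDM hardware; all the figures are made up for the example:

```python
import numpy as np

fs = 10_000                      # samples per second (illustrative)
t = np.arange(fs) / fs           # one second of time samples

# Two "streams" on different carrier frequencies (stand-ins for colours).
f1, f2 = 100, 250                # Hz, chosen arbitrarily
combined = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# At the "receiving end", split the combined signal by frequency.
spectrum = np.abs(np.fft.rfft(combined))
peaks = sorted(np.argsort(spectrum)[-2:].tolist())  # two strongest bins
print(peaks)                     # both carriers recovered: [100, 250]
```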

To fully realise OAM's potential, similar multiplexing of different "twists" must be developed.

Alan Willner and his team at the University of Southern California, along with colleagues at Nasa's Jet Propulsion Laboratory and Tel Aviv University, have now demonstrated one way to do that.

The team prepared two sets of four light beams, each with a set level of OAM twist, and each of the eight containing its own data stream.

The two sets were then filtered to have different polarisations, and arranged into a single beam with four streams at the centre and four in a doughnut-shape around the edge.

At the receiving end, the process was undone and the single beam was unpacked to yield its eight constituent beams, together carrying about 2.5 terabits per second.
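The reason beams with different twists can be packed together and then separated again is that OAM modes are orthogonal: around the beam axis the phase of a mode with integer twist number l goes as exp(i·l·φ), and two modes with different l average to zero against each other over a full turn. A minimal numerical sketch of that maths (not of the actual demultiplexing optics used in the experiment):

```python
import numpy as np

# Azimuthal angle around the beam axis, sampled over one full turn.
phi = np.linspace(0, 2 * np.pi, 10_000, endpoint=False)

def oam_mode(l):
    """Phase profile exp(i*l*phi) of an OAM mode with twist number l."""
    return np.exp(1j * l * phi)

def overlap(l1, l2):
    """Magnitude of the normalised inner product of two modes."""
    return np.abs(np.mean(oam_mode(l1) * np.conj(oam_mode(l2))))

print(overlap(3, 3))   # ~1.0: a mode correlates perfectly with itself
print(overlap(3, 5))   # ~0.0: different twists are orthogonal
```

Because the cross-overlap is zero, a receiver can project an incoming beam onto each twist number in turn and pull out the corresponding data stream without interference from the others.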

Initial experiments were only carried out over a distance of about a metre, and Prof Willner said that challenges remained for adapting the approach to fibres or for longer-distance transfer.

"One of the challenges in this respect is turbulence in the atmosphere," he explained.

"For situations that require high capacity... over relatively short distances of less than 1km, this approach could be appealing. Of course, there are also opportunities for long-distance satellite-to-satellite communications in space, where turbulence is not an issue."

Commenting on the work in an accompanying article in Nature Photonics, Juan Torres of the Institute of Photonic Sciences in Barcelona wrote that it "contributes a new chapter to the long history of telecommunications by demonstrating the potential of OAM... for increasing the transmission capacity".

However, he said that for wider application, a number of robust tools would be needed to manipulate OAM states and to create and deliver beams made up of several of them.

"The true impact of this development in the telecommunications industry will depend on how several important issues... are addressed and solved," he wrote.
 
I thought Blu-ray discs would be the new standard unit of data quantity. DVDs are so 20th century now.
 
Seems like the next development from DWDM by adding multiple polarization to each optical wavelength.

I worked with DWDM years ago doing simple things like connecting data centres together and put in some of the first of these optical networks in the UK. Shame I'll be too old for this new technology.

Cheers,

DV
 
Decoding FLAC takes time and hence introduces latency. In particular circumstances this really does mean that WAV "sounds better" than FLAC. If you are building a stand-alone device specifically for replay, there are far more technical hurdles to overcome in getting a decent sound out of FLAC; you have to decode it in some way first, which means using a processor clocked separately from the DAC, whereas WAV files just need transporting to the DAC.
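For a sense of scale (the figures here are illustrative assumptions, not measurements from any real device): real-time audio is delivered in fixed-size buffers, and decoding only adds latency beyond the buffer itself if the decoder cannot fill each buffer before its deadline. A rough budget calculation:

```python
# Illustrative latency budget for buffered audio playback.
# All figures are assumed for the example, not measured.
sample_rate = 44_100         # CD-quality samples per second
buffer_frames = 1_024        # frames per audio buffer (a common default)

# Each buffer must be ready within this many milliseconds:
buffer_period_ms = buffer_frames / sample_rate * 1000
print(f"{buffer_period_ms:.1f} ms per buffer")

# If decoding one buffer's worth of FLAC takes, say, 2 ms of CPU time
# (an assumed figure), the deadline is met with plenty of headroom.
assumed_decode_ms = 2.0
meets_deadline = assumed_decode_ms < buffer_period_ms
print("meets deadline:", meets_deadline)
```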

It would be nice if the "everything sounds the same bunch" would stop assuming that because _they_ don't understand the science involved the rest of don't/can't/won't/whatever. Grrrr ¬_¬
 
It would be nice if the "everything sounds the same bunch" would stop assuming that because _they_ don't understand the science involved the rest of don't/can't/won't/whatever. Grrrr ¬_¬

And ever so nice if certain types stopped assuming that because something could happen it always will, or does.
 
Decoding FLAC takes time and hence introduces latency. In particular circumstances this really does mean that WAV "sounds better" than FLAC. If you are building a stand-alone device specifically for replay, there are far more technical hurdles to overcome in getting a decent sound out of FLAC; you have to decode it in some way first, which means using a processor clocked separately from the DAC, whereas WAV files just need transporting to the DAC.

It would be nice if the "everything sounds the same bunch" would stop assuming that because _they_ don't understand the science involved the rest of don't/can't/won't/whatever. Grrrr ¬_¬

It is not a case of everything sounding the same (I don't believe that; my ears and technical knowledge support that stance), but it is the case that a correctly decoded FLAC is the same, bit for bit, as the WAV. From that point the bits/bytes are clocked through the system in exactly the same way regardless - dare I mention the buffer word?
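The bit-for-bit point is easy to demonstrate with any lossless codec. Using zlib as a stand-in for FLAC (FLAC's actual codec is different and tuned for audio, but the lossless round-trip property is the same): compress some PCM-like bytes, decompress them, and the result is identical to the input:

```python
import zlib

# Stand-in for raw PCM audio data; we only care about losslessness here.
pcm = bytes(range(256)) * 1000

compressed = zlib.compress(pcm)
restored = zlib.decompress(compressed)

print(restored == pcm)              # True: decoding is bit-for-bit identical
print(len(compressed) < len(pcm))   # True: and the file is smaller
```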

Finally - what if there are people who really do understand the (computer) science.....

My final word as it is time for tea.
 
WAV files just need transporting to the DAC... Really? DACs work directly on WAV files, do they? Do they hell.
 
Woah thanks for the responses - hi everyone.

granham-r - My original point was just that decoding FLAC introduces audible latency, which it does, and that when mixing/performing/recording this might cause issues for people. On a side note, I absolutely agree that the outputs from a decoded FLAC and a WAV are identical, of course. However, although this was not my original point, on cheaper PCs (the PCs most people are using when having these discussions) high CPU usage can and often does cause noise on the audio output. If the CPU is working hard to decode FLAC while keeping buffer sizes down, noise may well be injected onto the audio output and WAV will sound better than FLAC.

Robert - I'm not sure if in your comment you were meaning to say that I had made such an assumption, but decoding information of any sort always introduces latency.

Good topic by the way - sorry to half hijack it :S
 
Stugeek

Obviously my original comment was tongue-in-cheek; my usual response to such comments as yours is that if it works for you then it is true - for others, it may not be.

What I would say is that you underestimate the power of these 'cheaper PCs' - even an SBT has more than enough power to do the job, whether it be decoding a FLAC (or a WAV...) or building a data stream from the ethernet or wireless connection.

Anyway, no more thread hijacking - there is a huge thread dedicated to this somewhere.
 
Just as a little thought experiment, imagine that the explanation in the OP had been posted on the website of a hifi cable manufacturer.

Then take this bit:

Initial experiments were only carried out over a distance of about a metre, and Prof Willner said that challenges remained for adapting the approach to fibres or for longer-distance transfer.

"One of the challenges in this respect is turbulence in the atmosphere," he explained.

Now, I can just hear the hordes of anti-foo fighters who would, to a man, be saying:

That's just utter garbage. Do they take us for complete fools? Light isn't affected by atmospheric turbulence. What a load of old BS.

Or is it just me? ;)
 
Woah thanks for the responses - hi everyone.

granham-r - My original point was just that decoding FLAC introduces audible latency, which it does, and that when mixing/performing/recording this might cause issues for people. On a side note, I absolutely agree that the outputs from a decoded FLAC and a WAV are identical, of course. However, although this was not my original point, on cheaper PCs (the PCs most people are using when having these discussions) high CPU usage can and often does cause noise on the audio output. If the CPU is working hard to decode FLAC while keeping buffer sizes down, noise may well be injected onto the audio output and WAV will sound better than FLAC.

Robert - I'm not sure if in your comment you were meaning to say that I had made such an assumption, but decoding information of any sort always introduces latency.

Good topic by the way - sorry to half hijack it :S

in some of the recordings I have, the latency is several decades between the bits originally being encoded and me hearing decoded waveform. Now, what is your problem with latency?
 
Any PC from the past 15 years will not work even vaguely 'hard' to decode FLAC :p It's easier than MP3, IIRC.


If you want to check for audible noise under high CPU usage, run something called IntelBurnTest; the program will work your CPU to the max :1980s: and also create a huge amount of heat, potentially breaking your PC if it's a bit rickety.


That's what I used to debunk the claims people made that WAV sounds better because of lower CPU usage: I couldn't hear an audible difference between normal running (foobar shows ~0% CPU usage, so less than 1%, not counting other services) and 100% CPU torture. Hmmmmm....
 
I'm sure there will be a requirement for an audiophile grade version. These IT types just don't understand the importance of such critical realities.
 

