Neural networks can mediate between download size and quality, according to researcher

The BONES system makes network requests err on the side of smallness and upscales the difference via a neural network running on the receiving hardware. Credit: Jacob Chakareski et al.

Application data requirements vs. available network bandwidth have been the ongoing battle of the Information Age, but now it seems a truce is within reach, based on new research from NJIT Associate Professor Jacob Chakareski.

Chakareski and his team, collaborating with peers from the University of Massachusetts-Amherst, devised a system that makes network requests err on the side of smallness and upscales the difference via a neural network running on the receiving hardware.

They call it BONES (Buffer Occupancy-based Neural-Enhanced Streaming), which will be presented at the ACM SIGMETRICS conference in Venice, Italy, this summer, where only about 10% of submitted papers are accepted.

“Accessing high-quality video content can be challenging due to insufficient and unstable network bandwidth … neural enhancement has shown promising results in improving the quality of degraded videos through deep learning,” they said.

Using a mathematical technique called Lyapunov optimization, the team reported: “Our comprehensive experimental results indicate that BONES increases quality-of-experience by 4% to 13% over state-of-the-art algorithms, demonstrating its potential to enhance the video streaming experience for users.”

“People have thought about this before. But this is the first work where this is mathematically characterized and made sure that it fits within the latency constraints. People have talked of this idea of super-resolving data,” Chakareski elaborated. “The client carries out rate scheduling and computation scheduling decisions together. It is key to the approach. This has not been done before.”
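The joint decision Chakareski describes can be pictured as a Lyapunov "drift-plus-penalty" rule: for each video chunk, the client weighs the backlog it would add to its download and compute queues against the quality it would gain. The sketch below only illustrates that idea; the bitrates, enhancement levels, and weights are made-up numbers, not the model or code from the BONES paper.

```python
# Illustrative drift-plus-penalty decision for one video chunk.
# All numbers and names are hypothetical, not from the BONES paper.

BITRATES = [(1.0, 50.0), (2.5, 70.0), (5.0, 85.0)]    # (Mbps, base quality score)
ENHANCEMENTS = [(0.0, 0.0), (0.5, 8.0), (1.0, 12.0)]  # (compute seconds, quality boost)

CHUNK_SECONDS = 4.0  # playback duration of one chunk
V = 1.0              # trade-off weight: larger V favors quality over queue stability


def choose(download_queue, compute_queue, bandwidth_mbps):
    """Jointly pick (bitrate, enhancement seconds) minimizing drift - V * quality."""
    best, best_cost = None, float("inf")
    for rate, base_quality in BITRATES:
        # Seconds needed to download one chunk at this bitrate.
        download_secs = rate * CHUNK_SECONDS / bandwidth_mbps
        for enhance_secs, boost in ENHANCEMENTS:
            # Drift term penalizes adding work to already-backlogged queues.
            drift = download_queue * download_secs + compute_queue * enhance_secs
            cost = drift - V * (base_quality + boost)
            if cost < best_cost:
                best, best_cost = (rate, enhance_secs), cost
    return best


# Empty queues: the quality term dominates, so the richest option wins.
print(choose(0.0, 0.0, bandwidth_mbps=10.0))      # → (5.0, 1.0)
# Backlogged queues: drift dominates, so the lightest option wins.
print(choose(100.0, 100.0, bandwidth_mbps=10.0))  # → (1.0, 0.0)
```

Because download time and enhancement time enter the same cost, the client naturally trades bitrate against neural upscaling as conditions change, which is the coupling the quote above identifies as new.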

“We have a prototype that we built, so the results that are shown in the paper are based on the prototype. And it runs really well. The results are equally as good as those that we were able to observe through simulations,” he said. The team is also sharing its code and data publicly.

A proof-of-concept application is in the works. The BONES team is working with the University of Illinois Urbana-Champaign on a mixed-reality project called MiVirtualSeat: Semantics-Aware Content Distribution for Immersive Meeting Environments, which faces the network challenges that BONES addresses.

Chakareski said he is hopeful that popular video conferencing services may also adopt the method. “I think there will be a push for that because neural computation is becoming something. You hear a lot about machine learning in different domains, and this could be one more application where it could be used. We haven’t thought about commercializing the technology, but this is certainly something that one could pursue, and we may pursue.”

“There is this continuous race between the quality of the content and the capabilities of the network. As long as they both go side-by-side, this will always be an issue.”

The research is published on the arXiv preprint server.

More information:
Lingdong Wang et al, BONES: Near-Optimal Neural-Enhanced Video Streaming, arXiv (2023). DOI: 10.48550/arxiv.2310.09920

Neural networks can mediate between download size and quality, according to researcher (2024, April 22)
retrieved 26 April 2024

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without written permission. The content is provided for information purposes only.
