We recently had two papers accepted at the upcoming IEEE WoWMoM: one on loss sourcing in 802.11 (more on that in a later post) and the other on the performance characteristics of USB flash drives with respect to read/write speeds and power consumption. The work was done in large part by one of my summer REU students, and it is very cool to see those results turn into a tangible research output.
The paper itself was an outgrowth of discussions related to a DARPA WAND proposal from last year. While we weren't funded, we had proposed using USB flash as a cheap and easy way to provide significant on-demand storage for our packet caching architecture. There were concerns that the flash drives would be too expensive power-wise, and we were at a loss to respond directly. Long story short, that led to the above REU project to pin down the energy costs and performance of the drives, which would have been a necessity had the grant been funded (it was not, unfortunately).
The net result was that our initial hypothesis was correct, i.e. the power cost of the flash drive was dwarfed by the cost of the wireless adapter itself, especially in the USB 1.1 setup that would likely have been in place. We were not entirely vindicated, however, as we discovered that, by and large, the flash drive would never enter a low power mode when not in use: the file system is in essence permanently mounted, which triggers periodic "Keep Alive" messages across the USB bus and never allows the flash drive to enter suspend mode. The REU student did a nice job diving into the ugly innards of the Linux kernel USB module to hack up a suspend API for some basic testing. While it did offer the option to manually force a suspend, the performance could certainly use some tweaking, as the current implementation is ill-suited for significant amounts of suspend/resume operations.
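For reference, newer Linux kernels expose roughly what that suspend hack accomplished through sysfs runtime power management. A minimal sketch (the device path is hypothetical, and the attribute names are from current kernels rather than the 2008-era interface the student patched):

```python
def enable_autosuspend(dev="/sys/bus/usb/devices/1-1", idle_ms=2000):
    """Ask the kernel to runtime-suspend a USB device when idle.

    `dev` is an example path -- find the real one under
    /sys/bus/usb/devices/.  Requires root to write these files.
    """
    # "auto" lets the kernel suspend the device when it goes idle;
    # "on" (the usual default for USB storage) keeps it fully powered.
    with open(f"{dev}/power/control", "w") as f:
        f.write("auto")
    # Suspend only after idle_ms milliseconds of inactivity, so that
    # back-to-back I/O does not thrash suspend/resume cycles.
    with open(f"{dev}/power/autosuspend_delay_ms", "w") as f:
        f.write(str(idle_ms))
```

The delay parameter is exactly the knob that matters given the suspend/resume cost noted above: too short and the drive thrashes, too long and the idle power savings evaporate.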
All in all, a neat project for an REU. We posted a Wiki version of the original submission here if anyone would care to peruse it. The final camera-ready version of the paper (accepted as a short paper) will be posted to the website in the next month or so and will be available at the same link. The USB flash profiling tool is also posted on-line, available via the same link or the above direct link.
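For anyone curious what the core of such profiling boils down to, here is a minimal sequential-write sketch (an illustration with assumed names, not the posted tool, which also covers reads and power):

```python
import os
import time

def write_throughput_mb_s(path, total_mb=32, block_kb=64):
    """Sequential-write throughput in MB/s.

    Writes total_mb megabytes in block_kb-sized blocks and fsyncs,
    so the data actually reaches the device rather than sitting in
    the page cache.  Point `path` at a file on the mounted drive.
    """
    block = b"\0" * (block_kb * 1024)
    n_blocks = (total_mb * 1024) // block_kb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(n_blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force the writes out to the device
    return total_mb / (time.perf_counter() - start)
```

Sweeping `block_kb` is the interesting part: flash drives tend to fall off a cliff for small writes because of erase-block rewrite overhead.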
Finally, I am taking the plunge and starting a policy of putting reviews from accepted papers on-line when I can. The nice part is that it gives us a chance to do a minor rebuttal, but it also adds some welcome transparency to the review process, which in my opinion is a very good thing. The review / response notes for the paper can be found here.
Showing posts with label Wireless networks. Show all posts
Friday, February 22, 2008
Tuesday, November 27, 2007
Weekly papers - back again
Finally, back with the weekly papers segment after a rough beginning of November. Perhaps I was dodging reactions to the INFOCOM reviews and how things went with various folks. More on that later when I have time to do a lengthy post.
Diversity and multiplexing: a fundamental tradeoff in multiple-antenna channels This paper comes from our weekly papers meeting two weeks ago, prompted by discussions of our INFOCOM reviews and the relevance of MIMO to our current work. Transactions on Information Theory is a bit out of our normal purview, so kudos to Dave for taking the time to digest the paper in its entirety.
The paper looks at the tradeoff in a multi-antenna environment between reliability (diversity) and capacity (multiplexing). The most relevant result is the sharp dropoff along either dimension: if you try to gain in both, you will not get a solution that is strong in either. Not exactly a shocking result, but the work is quite sound and a nice discussion point for why our current work on wireless reliability is very interesting.
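For concreteness, the paper's headline curve for an m-transmit, n-receive Rayleigh channel gives optimal diversity gain d*(r) = (m − r)(n − r) at integer multiplexing gains r, piecewise linear in between. A small sketch of that tradeoff:

```python
import math

def dmt(m, n, r):
    """Optimal diversity gain d*(r) of an m x n MIMO channel at
    multiplexing gain r: (m - k)(n - k) at integer points k,
    with linear interpolation between them."""
    if not 0 <= r <= min(m, n):
        raise ValueError("multiplexing gain must lie in [0, min(m, n)]")
    k = math.floor(r)
    if k == r:
        return (m - k) * (n - k)
    d_k = (m - k) * (n - k)              # diversity at the floor point
    d_k1 = (m - k - 1) * (n - k - 1)     # diversity at the next integer
    return d_k + (r - k) * (d_k1 - d_k)  # linear interpolation
```

For a 2x2 system, pushing the multiplexing gain from 0 to its maximum of 2 drives the diversity gain from 4 down to 0, which is the "strong in neither" dropoff discussed above.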
Near as I can tell, industry has gone the route of capacity over reliability, meaning that our results regarding channel reliability are especially apt. In short, our most recent work has been looking at whether the reliability of the channel is correlated for nodes in close proximity. If loss comes primarily from the medium, losses should be correlated in nearby spaces but not necessarily across larger spaces. In contrast, if losses are not correlated even in a tight area, it is likely an individual device going crazy, not the medium itself. Most works in the literature on burstiness and the like seem to trust that the device itself is good and that the packet got corrupted before arriving, not that the device may itself be a significant source of the packet errors.
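The intuition can be sketched with a tiny correlation check (a hypothetical helper, not code from our paper): record a per-packet loss indicator at two co-located receivers listening to the same stream; medium-driven loss should correlate strongly, device-driven loss should not.

```python
def loss_correlation(losses_a, losses_b):
    """Pearson correlation between two per-packet loss indicator
    sequences (1 = lost, 0 = received) captured at two nearby
    receivers overhearing the same transmissions."""
    n = len(losses_a)
    if n != len(losses_b) or n < 2:
        raise ValueError("need two equal-length sequences of length >= 2")
    mean_a = sum(losses_a) / n
    mean_b = sum(losses_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(losses_a, losses_b))
    var_a = sum((a - mean_a) ** 2 for a in losses_a)
    var_b = sum((b - mean_b) ** 2 for b in losses_b)
    if var_a == 0 or var_b == 0:
        return 0.0  # a receiver lost everything or nothing; undefined, treat as 0
    return cov / (var_a * var_b) ** 0.5
```

A value near 1 points the finger at the medium; a value near 0 on co-located receivers suggests the individual device is the loss source.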
While previous works such as SRD by Balakrishnan reached what would seem a similar conclusion, they reached it for quite different reasons. Put simply, their physical sensors were highly scattered (APs 30+ feet apart), allowing multi-path effects to produce different loss probabilities. In contrast, we showed that losses tended to show a lack of correlation even at short distances, regardless of orientation, small separation, and heavy background traffic. Moreover, there were also "weird" periodicity aspects to some of the devices (the Intel Centrino chipset) that bear further investigation.
