Today began with a plenary talk by Dr. Ronald W. Schafer of HP Labs, "A Celebration of the Science and Technology of DSP," which I found pretty interesting. I noticed a few people nodding off here and there, and Dr. Schafer isn't exactly the most engaging speaker, but the history of DSP was very informative. For example, I had no idea that the first VLSI implementation of a DSP processor was in fact the Speak & Spell. Sort of a foreshadowing of DSP's influence in the multimedia/entertainment domain. Speaking of the future of DSP, Dr. Schafer mentioned, "who knows, maybe Compressed Sensing will be the new FFT." We can hope, right? I sort of wanted to give a little shout, but I suppressed the urge. I'm a total fanboy, it's true.
Afterwards, I talked to Ahmad Akl, who was presenting a poster by his colleagues Veria Havary-Nassab and Sadiq Hassan, under Dr. Valaee at the University of Toronto, "Compressive Detection for Wide Band Spectrum Sensing". This paper was pretty interesting, more for the novelty than the complexity of its application. The basic idea is that you have some set of transmitters that want to communicate on a shared medium, in this case at some wireless frequency. The total medium bandwidth, W, is split into a set of N discrete communication channels. Traditionally, to know on which of the channels to transmit, each device would have to scan through the entirety of W, checking the current signal energy in each communication band. The CS approach instead samples the entire band and sends it through a bank of K filters, where K < N. The filter responses act as the rows of a measurement matrix, \Phi. Ahmad wasn't sure which method was used to reconstruct the signal, but assumed it to be something like standard \ell_1 basis pursuit.
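If I understood the setup right, the idea can be sketched in a few lines of Python. Everything here is illustrative: the channel count, filter count, sparsity level, and the choice of a random Gaussian filter bank are my own assumptions, and I'm using greedy OMP as a stand-in for whatever \ell_1 decoder the authors actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64  # hypothetical number of discrete channels spanning the bandwidth W
K = 32  # number of filters (measurements), K < N
S = 3   # number of occupied channels -- the sparsity assumption

# Sparse channel-occupancy vector: energy present in only a few channels
x = np.zeros(N)
occupied = rng.choice(N, size=S, replace=False)
x[occupied] = rng.uniform(1.0, 2.0, size=S)

# Random filter bank: each row of Phi is one filter's response,
# so the K filter outputs replace N sequential channel scans
Phi = rng.standard_normal((K, N)) / np.sqrt(K)
y = Phi @ x

def omp(Phi, y, sparsity):
    """Greedy sparse recovery (Orthogonal Matching Pursuit)."""
    residual = y.copy()
    support = []
    for _ in range(sparsity):
        # Pick the filter-dictionary column most correlated with the residual
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        # Re-fit coefficients on the current support, update the residual
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, S)
print(sorted(np.flatnonzero(np.abs(x_hat) > 1e-6)))  # recovered occupied channels
```

With K = 32 Gaussian measurements and only 3 occupied channels out of 64, the greedy decoder recovers the occupancy pattern essentially every time, which is the whole point: fewer filter outputs than channels, yet you still learn where the spectrum is free.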
The implication here is that the channel usage is in fact sparse on the whole. I asked about the situation in which the communication medium is actually full, i.e., in the case of network congestion, and Ahmad said "it wouldn't work." I can see this as a potential drawback of the approach, but it's otherwise interesting. Ahmad will be presenting his own work, "Accelerometer-Based Gesture Recognition via Dynamic-Time Warping, Affinity Propagation, & Compressive Sensing", on Friday, so I'll try to stop by for his poster.
I also talked to Markus Flierl about his paper "A l1-Norm Preserving Motion Compensated Transform for Sparse Approximation of Image Sequences", which was a pretty heady work, and I would be dishonest if I said I followed all of it, but I'll try to convey the gist. He suggested starting with his previous work, "A Motion-Compensated Orthogonal Transform with Energy Concentration Constraint". He said the idea was "not practical" but more of an exercise in what is possible. I only began to understand this while eating lunch at Chen's Kitchen, just down the street from the hotel. Sungkwang and I were trying to think about what you could actually do with an l1-preserving transform, but we came up empty. I'm not really sure what this would be useful for, but I think that's what Markus meant by "not practical". I asked him if he thought there was any confluence between these ideas and Compressive Sensing, but he was of the opinion that they would be unsuited due to CS's emphasis on linear projections into the measurement basis. I'm sure someone will find a use for this, however!
Next, I spoke with Dimitri Milioris about his group's work "Multiple-Measurement Bayesian Compressed Sensing Using GSM Priors for DOA Estimation". Before today, I hadn't really ever heard of Direction of Arrival (DOA) estimation, but it seems to be pretty popular with the beamforming crowd. Also, Bayesian CS seems to have a really good showing at ICASSP this year; there are quite a few papers being presented that make use of this methodology. To be honest, I've kind of stayed away from BCS because of its complexity, and the simplicity of iterated methods just makes sense to me, but the Bayesian approach seems very powerful in the noisy setting. I'll have to sit down and really get into it a bit more. Anyway, back to the paper at hand. In this work, they are trying to determine, to single-degree precision (though I'm sure the precision could be increased fairly easily), the correct angle between the receiver and the source. There are 180 angles to choose from in this example, represented by a 180-element vector consisting of 0's at the incorrect angles and a 1 at the correct angle; this vector is the target of the reconstruction. Their method also makes use of multiple receivers to add a bit more informational power to the system. Dimitri said they found optimal performance using Gaussian Scale Mixtures due to their closeness to Dirac deltas while still having heavy tails.
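To make that 1-sparse formulation concrete, here's a toy Python sketch. The array geometry, sensor count, and noise level are all my own assumptions, and I'm using a simple matched-filter pick over a dictionary of steering vectors rather than the paper's Bayesian GSM machinery, which is the crudest possible decoder for a 1-sparse target, not their actual method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical uniform linear array: M sensors at half-wavelength spacing
M = 8
d_over_lambda = 0.5
angles = np.arange(180)  # candidate angles in degrees, one per vector entry

def steering(theta_deg):
    """Array response to a plane wave arriving from the given angle."""
    theta = np.deg2rad(theta_deg)
    m = np.arange(M)
    return np.exp(2j * np.pi * d_over_lambda * m * np.cos(theta))

# Dictionary: column k is the steering vector for candidate angle k
A = np.stack([steering(t) for t in angles], axis=1)  # M x 180

# Ground truth: the 180-element indicator vector with a single 1
true_angle = 42
x = np.zeros(180)
x[true_angle] = 1.0

# One noisy snapshot at the receiver
noise = 0.02 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
y = A @ x + noise

# For a 1-sparse target, the simplest decoder is a matched filter:
# pick the candidate steering vector most correlated with the measurement
est_angle = int(np.argmax(np.abs(A.conj().T @ y)))
print(est_angle)
```

Adjacent steering vectors are highly correlated, which is exactly why the single-degree precision claim is nontrivial and why a noise-robust Bayesian decoder (plus the multiple receivers they use) earns its keep over this naive argmax.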
Dimitri also tipped me off to a paper by his colleague George Tzagkarakis, "Bayesian Compressed Sensing Imaging Using a Gaussian Scale Mixture", which will be presented as a poster on Thursday. I talked to George a little about the complications of having all your techniques restricted to the decoder side, and the problematic nature of CS in that it becomes very non-intuitive to apply the power of modern DSP methods (modelling, motion compensation, etc.) when you simply don't have the original data to work with. He also told me that these papers were presented a little out of order, as the DOA paper is an extension of the imaging technique. I look forward to asking some questions about BCS with GSM on Thursday.
Unfortunately, I was not able to attend the lectures on "Using Reed-Muller Sequences as Deterministic Compressed Sensing Matrices for Image Reconstruction" or "Non-Convex Group Sparsity: Application to Color Imaging", as both took place during the Biomedical Imaging session, which ran concurrently with the Scalable and Multiview Coding session I attended. However, it seems I would have been better served going to the Biomed session, since 4/5 of the authors did not show up to the session I attended. Dr. Fowler was chairing the session, and to quote him, it was "an embarrassment." He told us that perhaps one in ten authors might not attend, but four in five was kind of a new low. It seems increasingly frequent that authors submit their papers to ICASSP/ICIP and pay the attendance fee while never intending to actually show up for their presentations, in the hope that the paper will slide under the door and still make it into the Proceedings. This gets kind of stale after a while and makes me wonder if the IEEE will begin actually enforcing its policies and become stricter about blacklisting. But perhaps it's just a condition of the economic times and it'll be better in the future. We'll have to see.
That's all I have for now! I'll be making an update later tonight after attending the Video Motion Analysis session this evening, which will feature two quite interesting papers...that is, if they come ;)