2025-05-23 updates
A bunch of updates after not so many updates:
- The site has been updated with more content, including slides from some recent talks.
- I was awarded the Presidential Outstanding Faculty Scholar Award by Rutgers. This award is given to faculty being promoted to Professor (from Associate). However, I have not received official notification of the promotion, so I'll just assume the award is a good sign (?)
- Our monograph is finally done!
- Dey et al., Codes for Adversaries: Between Worst-Case and Average-Case Jamming, Foundations and Trends in Communications and Information Theory, 2024. This is the culmination of 15+ years of research we’ve been doing on adversarial channel models!
- Preprints!
- Vargas et al., Understanding Generative AI Content with Embedding Models, under review.
- Tao et al., Privacy-Preserving Visualization of Brain Functional Connectivity, about to be submitted.
- Banerjee et al., Measuring model variability using robust non-parametric testing, revision under review.
- Journal and conference papers!
- Tao and Sarwate, Differentially Private Distribution Estimation Using Functional Approximation, ICASSP 2025. We look at differentially private CDF estimation via functional approximation. A journal version is in the works.
- Wu et al., Learning to Help in Multi-Class Settings, ICLR 2025. We study a variation on learning with abstention to design effective “helpers” for resource-constrained devices needing to do ML/AI stuff.
- Tao et al., Federated Privacy-Preserving Visualization: A Vision Paper, IEEE BigData 2024. We’re looking at where and when private visualization might be useful.
- Sathyavageeswaran et al., Timely Offloading in Mobile Edge Cloud Systems, ITW 2024. We use an MDP framework to analyze computational offloading policies.
- Dey et al., Computationally Efficient Codes for Strongly Dobrushin-Stambler Nonsymmetrizable Oblivious AVCs, ISIT 2024. We develop polynomial-time codes for classes of adversarial channels!
- A. D. Sarwate, Machine learning with differential privacy, a chapter in the Handbook of Sharing Confidential Data: Differential Privacy, Secure Multiparty Computation, and Synthetic Data from CRC Press. A limited survey of “classical” ML under differential privacy, aimed more at potential practitioners.
- Rootes-Murdy et al., Cortical similarities in psychiatric and mood disorders identified in federated VBM analysis via COINSTAC, Patterns, 2024. A use-case for COINSTAC, the federated learning platform for neuroimaging analysis on which I have been collaborating for the last 10 years.