About me
Hi, I’m Mikhail Okunev, a 4th-year PhD candidate in the Brown Visual Computing Lab, advised by James Tompkin. I’m broadly interested in 3D reconstruction, inverse rendering, and dynamic scene representations, especially with a monocular camera.
I joined academia pretty late in life. Before that, I had a career as a research/machine learning engineer at Meta Reality Labs, on Meta’s spam detection team, at Microsoft Bing, and at the Silicon Valley startup Instrumental. I have worked on a broad range of topics, including lighting estimation, foveated rendering, video super-resolution, automatic visual anomaly detection, spam & fraud detection, and ranking.
In my free time I enjoy brewing coffee, lifting weights, and playing piano.
Publications
F-TöRF: Flowed Time of Flight Radiance Fields
Mikhail Okunev*, Marc Mapeke*, Benjamin Attal, Christian Richardt, Matthew O'Toole, James Tompkin
tl;dr: C-ToF depth cameras can't reconstruct dynamic objects well. We fix this with a NeRF model that takes the raw ToF signal and reconstructs motion along with depth. All with a static monocular camera!
ECCV 2024 [Paper][Project Page]
Monocular Dynamic Gaussian Splatting is Fast and Brittle but Smooth Motion Helps
Yiqing Liang, Mikhail Okunev, Mikaela Angelina Uy, Runfeng Li, Leonidas J. Guibas, James Tompkin, Adam Harley
tl;dr: We benchmark state-of-the-art Gaussian Splatting methods for dynamic monocular reconstruction and analyze their performance on popular benchmarks.
In submission [Paper][Project Page]
Spatiotemporally Consistent HDR Indoor Lighting Estimation
Zhengqin Li, Li Yu, Mikhail Okunev, Manmohan Chandraker, Zhao Dong
tl;dr: Spatially and temporally consistent HDR lighting estimation from videos.
ACM ToG, presented at SIGGRAPH Asia 2023 [Paper][Video][Project Page]
DeepFovea: Neural Reconstruction for Foveated Rendering and Video Compression using Learned Statistics of Natural Videos
Anton Kaplanyan, Anton Sochenov, Thomas Leimkühler, Mikhail Okunev, Todd Goodall, Gizem Rufo
tl;dr: We built a foveated rendering system that renders only ~10% of the pixels and inpaints the rest with a neural network.
SIGGRAPH Asia 2019 [Paper][Project Page]