In this talk, we are going to explore our recent paper, which builds upon MARINA -- the current state-of-the-art distributed non-convex optimization method in terms of theoretical communication complexity. The theoretical superiority of this method can be largely attributed to two sources: the use of a carefully engineered biased stochastic gradient estimator, which reduces the number of communication rounds, and the reliance on *independent* stochastic communication compression operators, which reduces the number of bits transmitted within each communication round. In this paper we

i) extend the theory of MARINA to support a much wider class of potentially *correlated* compressors, broadening the reach of the method beyond the classical setting of independent compressors,

ii) show that a new quantity, for which we coin the name Hessian variance, allows us to significantly refine the original analysis of MARINA without any additional assumptions, and

iii) identify a special class of correlated compressors based on the idea of random permutations, for which we coin the term PermK. The use of this technique results in a strict improvement on the previous MARINA rate. In the low Hessian variance regime, the improvement can be as large as √n, when d ≥ n, and 1 + √(d/n), when n ≥ d, where n is the number of workers and d is the number of parameters describing the model we are learning.
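To make the permutation idea concrete, here is a minimal sketch (not the paper's implementation) of a PermK-style correlated compressor in the regime d ≥ n, assuming for simplicity that n divides d: a single random permutation, shared by all workers, splits the d coordinates into n disjoint blocks, and worker i transmits only its own block, scaled by n so that the average over workers remains unbiased.

```python
import numpy as np

def permk_compress(grads, rng):
    """Sketch of a PermK-style correlated compressor (assumes d >= n, n | d).

    grads: list of n gradient vectors, each of dimension d.
    One shared random permutation partitions the d coordinates into n
    disjoint blocks; worker i keeps only the coordinates in its block,
    scaled by n so the average of the compressed vectors is an unbiased
    estimate of the average gradient.
    """
    n = len(grads)
    d = grads[0].shape[0]
    assert d % n == 0, "this sketch assumes n divides d"
    perm = rng.permutation(d)
    blocks = perm.reshape(n, d // n)  # n disjoint coordinate blocks
    compressed = []
    for i, g in enumerate(grads):
        c = np.zeros(d)
        c[blocks[i]] = n * g[blocks[i]]  # scale by n for unbiasedness of the mean
        compressed.append(c)
    return compressed
```

Note the correlation: because the blocks are disjoint, each coordinate of the averaged compressed vector is covered by exactly one worker, so each worker sends only d/n entries per round, and when all workers hold the same gradient the average is recovered exactly.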