Recently, a wealth of effort has been devoted to designing protocols for secure machine learning. In particular, much of this work aims to ensure that predictions from highly accurate deep neural network (NN) models remain secure. However, since NNs are trained on data, a key question is how such models can be trained securely in the first place. The few prior works on secure NN training have focused either on designing custom protocols for existing training algorithms, or on developing tailored training algorithms and then applying generic secure protocols. In this work, we propose to design training algorithms simultaneously with a secure protocol for evaluating them, incorporating optimizations on both fronts. We present QUOTIENT, a new method for discretized training of deep neural networks designed to be evaluated under secure computation, along with a secure two-party computation (2PC) protocol for it. QUOTIENT incorporates key components of state-of-the-art neural network training, such as layer normalization and adaptive gradient methods. Compared to the prior state of the art in secure 2PC neural network training, we obtain a 50× improvement in time and a 6% improvement in accuracy. Additionally, our method is the first practical secure 2PC framework for neural network training over WAN.