Assisted reproductive technologies (ART) play a crucial role in conserving threatened wildlife species such as Bos gaurus. ART requires a large number of mature oocytes, and small antral follicles (SAFs) in the ovary are often used as an abundant source of bovine oocytes. However, oocytes from SAFs often struggle to complete maturation and yield blastocysts of lower quality and in lower numbers than fully grown oocytes.
Adv Neural Inf Process Syst
December 2019
Compressing word embeddings is important for deploying NLP models in memory-constrained settings. However, understanding what makes compressed embeddings perform well on downstream tasks is challenging: existing measures of compression quality often fail to distinguish between embeddings that perform well and those that do not. We thus propose the eigenspace overlap score as a new measure.
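A minimal sketch of how such a score can be computed, assuming the definition ||UᵀŨ||²_F / max(d, d̃), where U and Ũ hold the left singular vectors of the original and compressed embedding matrices (the exact normalization here is our assumption, not quoted from the paper):

```python
import numpy as np

def eigenspace_overlap(X, X_tilde):
    """Eigenspace overlap between an embedding matrix X (n x d) and a
    compressed counterpart X_tilde (n x d_tilde).

    Assumed definition: ||U^T U_tilde||_F^2 / max(d, d_tilde), where U and
    U_tilde are the left singular vectors of X and X_tilde. Scores near 1
    mean the compressed embeddings span nearly the same subspace.
    """
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    U_t, _, _ = np.linalg.svd(X_tilde, full_matrices=False)
    overlap = np.linalg.norm(U.T @ U_t, "fro") ** 2
    return overlap / max(X.shape[1], X_tilde.shape[1])

# Toy check: coarse uniform quantization barely perturbs the spanned subspace.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 100))
X_quantized = np.round(X * 4) / 4          # quantize to a grid of step 0.25
print(eigenspace_overlap(X, X_quantized))  # close to 1.0
```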
Data augmentation, a technique in which a training set is expanded with class-preserving transformations, is ubiquitous in modern machine learning pipelines. In this paper, we seek to establish a theoretical framework for understanding data augmentation. We approach this from two directions: First, we provide a general model of augmentation as a Markov process, and show that kernels appear naturally with respect to this model, even when we do not employ kernel classification.
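To make the kernel connection concrete, here is a hedged sketch: averaging a base kernel over independent random augmentations of both inputs yields a valid augmentation-invariant kernel. The RBF base kernel and Gaussian jitter below are illustrative choices, not the paper's construction:

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Base kernel k0(x, y) = exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def augmented_kernel(x, y, n_samples=200, noise=0.1, seed=0):
    """Monte Carlo estimate of k(x, y) = E[k0(t(x), t'(y))], where t and t'
    are independent random augmentations (here: additive Gaussian jitter).
    Averaging the feature map over augmentations keeps the kernel valid."""
    rng = np.random.default_rng(seed)
    vals = [
        rbf(x + rng.normal(scale=noise, size=x.shape),
            y + rng.normal(scale=noise, size=y.shape))
        for _ in range(n_samples)
    ]
    return np.mean(vals)

x, y = np.array([0.0, 1.0]), np.array([0.2, 0.9])
print(rbf(x, y), augmented_kernel(x, y))
```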
Proc Mach Learn Res
June 2019
Fast linear transforms are ubiquitous in machine learning, including the discrete Fourier transform, discrete cosine transform, and other structured transformations such as convolutions. All of these transforms can be represented by dense matrix-vector multiplication, yet each has a specialized and highly efficient (subquadratic) algorithm. We ask to what extent hand-crafting these algorithms and implementations is necessary, what structural priors they encode, and how much knowledge is required to automatically learn a fast algorithm for a provided structured transform.
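As a concrete instance of this dense-versus-fast dichotomy, the DFT can be computed either as an O(n²) matrix-vector product or by the O(n log n) FFT; a learned fast transform would, in effect, recover a sparse factorization playing the role of the FFT's butterfly stages. A standard NumPy illustration (not the paper's learning procedure):

```python
import numpy as np

n = 256
# Dense DFT matrix: F[j, k] = exp(-2*pi*i*j*k / n). Applying it costs O(n^2).
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(-2j * np.pi * j * k / n)

x = np.random.default_rng(0).normal(size=n)
dense = F @ x            # O(n^2): explicit matrix-vector multiplication
fast = np.fft.fft(x)     # O(n log n): the specialized butterfly algorithm

# Both compute the same transform; the fast version exploits structure.
print(np.allclose(dense, fast))  # True
```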
Proc Mach Learn Res
April 2019
We investigate how to train kernel approximation methods that generalize well under a memory budget. Building on recent theoretical work, we define a measure of kernel approximation error which we find to be more predictive of the empirical generalization performance of kernel approximation methods than conventional metrics. An important consequence of this definition is that a kernel approximation matrix must be high rank to attain close approximation.
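The rank observation is easy to see empirically: a random Fourier feature approximation ZZᵀ has rank at most the number of features, and its error shrinks only as that rank grows. The sketch below reports the conventional relative Frobenius error, not the paper's proposed measure:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))

# Exact Gaussian kernel matrix K[i, j] = exp(-||x_i - x_j||^2 / 2).
sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-sq / 2)

def rff_gram(X, n_features, rng):
    """Random Fourier features (Rahimi & Recht): Z Z^T approximates K.
    The Gram matrix Z Z^T has rank at most n_features, so the feature
    count caps the rank of the approximation."""
    d = X.shape[1]
    W = rng.normal(size=(d, n_features))           # frequencies ~ N(0, I)
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    Z = np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
    return Z @ Z.T

for m in [10, 100, 1000]:
    err = np.linalg.norm(K - rff_gram(X, m, rng), "fro") / np.linalg.norm(K, "fro")
    print(f"features (max rank) = {m:4d}, relative Frobenius error = {err:.3f}")
```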
Adv Neural Inf Process Syst
December 2018
The low displacement rank (LDR) framework for structured matrices represents a matrix through two displacement operators and a low-rank residual. Existing use of LDR matrices in deep learning has applied fixed displacement operators encoding forms of shift invariance akin to convolutions. We introduce a rich class of LDR matrices with more general displacement operators, and explicitly learn over both the operators and the low-rank component.
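For intuition, the classic example is the Toeplitz class: with the unit-f-circulant shift operators Z_1 and Z_-1 as displacement operators, the Sylvester displacement Z_1 T - T Z_-1 of any Toeplitz matrix T has rank at most 2. The check below verifies this standard fact numerically (it is not code from the paper):

```python
import numpy as np
from scipy.linalg import toeplitz

def unit_f_circulant(n, f):
    """Z_f: ones on the subdiagonal, f in the top-right corner."""
    Z = np.diag(np.ones(n - 1), k=-1)
    Z[0, -1] = f
    return Z

n = 8
rng = np.random.default_rng(0)
T = toeplitz(rng.normal(size=n), rng.normal(size=n))  # generic Toeplitz matrix

# Sylvester displacement with operators (Z_1, Z_{-1}): the residual
# L = Z_1 T - T Z_{-1} is low rank even though T itself is full rank.
L = unit_f_circulant(n, 1) @ T - T @ unit_f_circulant(n, -1)
print(np.linalg.matrix_rank(T), np.linalg.matrix_rank(L))  # 8, at most 2
```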
Kernel methods have recently attracted resurgent interest, showing performance competitive with deep neural networks in tasks such as speech recognition. The random Fourier features map is a technique commonly used to scale up kernel machines, but employing the randomized feature map means that O(1/ε²) samples are required to achieve an approximation error of at most ε. We investigate some alternative schemes for constructing feature maps that are deterministic, rather than random, by approximating the kernel in the frequency domain using Gaussian quadrature.
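In one dimension the idea can be sketched directly: the Gaussian kernel is the expectation of cos(w(x - y)) over a Gaussian frequency distribution, so replacing random frequency samples with Gauss-Hermite quadrature nodes yields a deterministic feature map. A minimal 1-D sketch (the paper's constructions extend to higher dimensions):

```python
import numpy as np

def quadrature_features(x, deg=8):
    """Deterministic 1-D feature map for k(x, y) = exp(-(x - y)^2 / 2).

    Since k(x, y) = E_{w ~ N(0,1)}[cos(w (x - y))], we replace random
    frequencies with Gauss-Hermite quadrature nodes and weights, giving
    z(x)^T z(y) ~= k(x, y) with no randomness."""
    # Nodes/weights for weight function exp(-w^2 / 2); weights sum to sqrt(2*pi).
    nodes, weights = np.polynomial.hermite_e.hermegauss(deg)
    w = np.sqrt(weights / np.sqrt(2 * np.pi))
    return np.concatenate([w * np.cos(nodes * x), w * np.sin(nodes * x)])

x, y = 0.3, 1.1
approx = quadrature_features(x) @ quadrature_features(y)
exact = np.exp(-((x - y) ** 2) / 2)
print(approx, exact)  # nearly identical even at low degree
```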