How Quasiconvexity works part 7 (Machine Learning)
<p>We prove homogenization for a class of viscous Hamilton-Jacobi equations in the stationary and ergodic setting in one space dimension. Our assumptions most notably include the following: the Hamiltonian is of the form G(p) + βV(x, ω), the function G is coercive and strictly quasiconvex, min G = 0, β > 0, and the random potential V takes values in [0, 1] with full support and satisfies a hill condition that involves the diffusion coefficient. Our approach is based on showing that, for every direction outside of a bounded interval (θ1(β), θ2(β)), there is a unique sublinear corrector with certain properties. We obtain a formula for the effective Hamiltonian and deduce that it is coercive, identically equal to β on (θ1(β), θ2(β)), and strictly monotone elsewhere.</p>
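<p>For orientation, the abstract above can be read against the standard model in this literature. The display below is a hedged sketch of the typical equation studied in such homogenization results, not a formula quoted from the paper itself; the exact scaling and the role of the diffusion coefficient a may differ in the actual article.</p>

```latex
% Hedged sketch: a typical viscous Hamilton-Jacobi equation in this setting
% (assumed form; the exact equation is not stated in the excerpt above).
\partial_t u^{\varepsilon}
  = \varepsilon\, a\!\left(\tfrac{x}{\varepsilon},\omega\right)\partial_{xx} u^{\varepsilon}
  + G\!\left(\partial_x u^{\varepsilon}\right)
  + \beta\, V\!\left(\tfrac{x}{\varepsilon},\omega\right).

% Homogenization: as \varepsilon \to 0, u^{\varepsilon} converges to the
% solution u of a deterministic effective equation
\partial_t u = \overline{H}_{\beta}\!\left(\partial_x u\right),

% where \overline{H}_{\beta} is the effective Hamiltonian that the paper
% shows is coercive, equal to \beta on (\theta_1(\beta), \theta_2(\beta)),
% and strictly monotone elsewhere.
```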
<p>2. A parallel subgradient projection algorithm for quasiconvex equilibrium problems under the intersection of convex sets (arXiv)</p>
<p>Author : <a href="https://arxiv.org/search/?searchtype=author&query=Yen%2C+L+H" rel="noopener ugc nofollow" target="_blank">Le Hai Yen</a>, <a href="https://arxiv.org/search/?searchtype=author&query=Muu%2C+L+D" rel="noopener ugc nofollow" target="_blank">Le Dung Muu</a></p>
<p>Abstract : In this paper, we study the equilibrium problem where the bifunction may be quasiconvex with respect to the second variable and the feasible set is the intersection of a finite number of convex sets. We propose a projection algorithm in which the projection can be computed independently onto each component set. The convergence of the algorithm is investigated, and numerical examples for a variational inequality problem involving an affine fractional operator are provided to demonstrate the behavior of the algorithm.</p>
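<p>The key idea in the abstract above — a subgradient step followed by projections computed independently onto each component set — can be illustrated with a minimal sketch. The code below is a hypothetical toy example, not the authors' algorithm: it minimizes a simple quasiconvex ratio over the intersection of two halfspaces, averaging the per-set projections (each of which could be evaluated in parallel). All function names, step sizes, and the test problem are assumptions for illustration only.</p>

```python
import numpy as np

def f(x):
    # A simple quasiconvex objective on the region of interest:
    # a ratio of affine functions (positive denominator).
    return (x[0] + 1.0) / (x[1] + 2.0)

def subgrad(x):
    # Gradient of f (smooth here, so it serves as the subgradient).
    denom = x[1] + 2.0
    return np.array([1.0 / denom, -(x[0] + 1.0) / denom**2])

def project_halfspace(x, a, b):
    # Project x onto the halfspace {y : a.y <= b}.
    viol = a @ x - b
    if viol <= 0:
        return x
    return x - viol * a / (a @ a)

def subgradient_projection_step(x, projections, step):
    # Normalized subgradient step, then the projections onto the
    # component sets are computed independently (parallelizable)
    # and averaged.
    g = subgrad(x)
    x = x - step * g / (np.linalg.norm(g) + 1e-12)
    return np.mean([P(x) for P in projections], axis=0)

# Feasible set: {x0 >= 0} intersected with {x1 <= 3},
# each handled by its own cheap projection.
projections = [
    lambda x: project_halfspace(x, np.array([-1.0, 0.0]), 0.0),  # x0 >= 0
    lambda x: project_halfspace(x, np.array([0.0, 1.0]), 3.0),   # x1 <= 3
]

x = np.array([2.0, 0.5])
for k in range(200):
    # Diminishing step sizes, as is standard for subgradient methods.
    x = subgradient_projection_step(x, projections, step=0.5 / (k + 1))
```

<p>The averaging of independent projections only enforces feasibility asymptotically, but it avoids ever projecting onto the (possibly expensive) intersection directly, which is the practical appeal described in the abstract.</p>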
<p><a href="https://medium.com/@monocosmo77/how-quasiconvexity-works-part7-machine-learning-502ffc0a0cc5">Website</a></p>