Applied Numerical Mathematics

Volume 171, January 2022, Pages 408-425

An explicit algorithm for solving monotone variational inequalities

Dedicated to Professor Dinh Nho Hao on the occasion of his 60th birthday.
https://doi.org/10.1016/j.apnum.2021.09.013

Abstract

In this paper, we are interested in the generalized variational inequality problem in real Hilbert spaces. We propose an explicit proximal method that requires only one proximal step and one mapping evaluation per iteration and uses an adaptive step-size rule that avoids the need for prior knowledge of the Lipschitz constant of the involved mapping. Weak convergence of the proposed scheme is established under standard assumptions. Under strong monotonicity, we establish an R-linear convergence rate for the new method. Extensive numerical experiments illustrate the advantages and the applicability of our scheme. Moreover, our work theoretically generalizes several recent results in this field.

Introduction

Let C be a nonempty, closed and convex subset of a real Hilbert space H and let A: H → H be a given mapping. The classical variational inequality (VI) problem [13] consists of finding a point x* ∈ C such that
⟨Ax*, x − x*⟩ ≥ 0 for all x ∈ C. (1)

Observe that by including the indicator function δ_C of the set C, the VI (1) can be written as the following unconstrained problem:
⟨Ax*, x − x*⟩ + δ_C(x) − δ_C(x*) ≥ 0 for all x ∈ H. (2)

When δ_C is replaced by a general convex function g: H → ℝ, the so-called generalized variational inequality problem is obtained, that is, find x* ∈ C such that
⟨Ax*, x − x*⟩ + g(x) − g(x*) ≥ 0 for all x ∈ H. (3)
From here on we denote the solution set of (3) by Ω and assume it is nonempty.

Variational inequalities arise naturally in many theoretical and applied fields, such as boundary value problems, mechanics, physics, economics and control theory; for further examples see, e.g., [4], [12], [18], [19], [27], [32], [33].

There exist various equivalent reformulations of (3); in particular, it can be recast as the following inclusion problem:
Find x* ∈ H such that 0 ∈ (A + ∂g)(x*). (4)

Regarding methods for solving (4) and more general nonlinear problems, one of the most popular methods is the proximal point method, which was introduced by Martinet [24] and extended by Rockafellar [29] and many others; see, for example, [7], [28]. Recall that for a given convex function f: H → ℝ and λ > 0, the proximal operator of f is defined for v ∈ H as
prox_{λf}(v) := argmin_{x∈H} { f(x) + (1/(2λ)) ‖x − v‖² }.
Another related operator is the resolvent of a maximal monotone operator T: H → 2^H, defined as (I + λT)^{-1}, where I denotes the identity mapping on H.

Observe that the proximal and resolvent operators are linked as follows: let f: H → ℝ be a given convex function and ∂f its subdifferential; then prox_{λf} = (I + λ∂f)^{-1}.
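To make these operators concrete, here is a minimal numerical sketch (not taken from the paper; the vectors are made up) of two proximal operators that admit closed forms: the ℓ1-norm, whose prox is componentwise soft-thresholding, and the indicator δ_C of a box, whose prox is the metric projection onto C.

import numpy as np

def prox_l1(v, lam):
    # prox_{lam*||.||_1}(v): componentwise soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def project_box(v, lo, hi):
    # prox of the indicator of C = [lo, hi]^d: the projection onto C
    return np.clip(v, lo, hi)

v = np.array([1.5, -0.2, 0.7])
print(prox_l1(v, 0.5))           # [ 1.  -0.   0.2]
print(project_box(v, 0.0, 1.0))  # [1.  0.  0.7]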

Proximal operators are commonly used as an inner step in various algorithms, for example in splitting methods. These methods decompose a given operator into a sum of maximal monotone operators, yielding an inclusion problem of the general form (4); the resolvent/proximal evaluations of the individual summands are then easier than evaluations involving the original operator.

With respect to (3), we recall the forward-backward splitting method introduced in [21]:
x_{n+1} = prox_{λg}(x_n − λ A x_n),
where x_0 ∈ H is chosen arbitrarily and λ > 0. The operator (I − λA) is called the forward operator and the proximal operator is called the backward operator.
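As an illustration only, the following sketch runs the forward-backward iteration on a LASSO-type instance, assuming A = ∇f with f(x) = ½‖Mx − b‖² and g = τ‖·‖₁; the data M, b and the weight τ are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
tau = 0.1

A = lambda x: M.T @ (M @ x - b)          # forward mapping A = grad f
prox_g = lambda v, lam: np.sign(v) * np.maximum(np.abs(v) - lam * tau, 0.0)  # prox of lam*tau*||.||_1

L = np.linalg.norm(M, 2) ** 2            # Lipschitz constant of A
lam = 1.0 / L
x = np.zeros(10)
for _ in range(500):
    x = prox_g(x - lam * A(x), lam)      # forward step followed by the backward (proximal) step
print(x)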

The proximal point method is an iterative procedure in which the update of the next iterate x_{n+1} depends only on the previous one x_n. Alvarez and Attouch [1] studied the “heavy ball with friction” dynamical system and introduced an inertial technique that uses the two previous iterates x_n, x_{n−1} in order to accelerate the proximal point method and other related schemes.

For finding zeros of a maximal monotone set-valued mapping T: H → 2^H, Alvarez and Attouch [1] studied the following second-order differential system:
0 ∈ ẍ(t) + γ(t) ẋ(t) + T(x(t)). (6)

By using the discretization
ẋ(t) = (x_n − x_{n−1})/h_n,  ẍ(t) = (x_{n+1} − 2x_n + x_{n−1})/h_n²,
(6) translates to the so-called inertial proximal point method:
y_n = x_n + θ_n (x_n − x_{n−1}),
x_{n+1} = (I + λT)^{-1}(y_n).
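A minimal sketch of the inertial proximal point iteration above, assuming T = ∂f with f(x) = ½‖x − c‖², so that the resolvent has the closed form (I + λT)^{-1}(y) = (y + λc)/(1 + λ); the vector c and the parameters θ, λ are illustrative choices.

import numpy as np

c = np.array([1.0, -2.0, 3.0])
resolvent = lambda y, lam: (y + lam * c) / (1.0 + lam)   # (I + lam*T)^{-1} for T(x) = x - c

theta, lam = 0.3, 1.0
x_prev, x = np.zeros(3), np.ones(3)
for _ in range(100):
    y = x + theta * (x - x_prev)          # inertial extrapolation from the two previous iterates
    x_prev, x = x, resolvent(y, lam)      # backward (resolvent) step
print(x)                                  # approaches the unique zero of T, namely c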

Moudafi and Oliny [26] extended the inertial proximal point method by adding a single-valued operator B to T. Following the above, they proposed a splitting procedure in which the next iterate x_{n+1} is the solution x of
0 ∈ λ_n T(x) + x − x_n − θ_n (x_n − x_{n−1}) + λ_n B(x_n).
Clearly, when B = 0, this reduces to the inertial proximal point method obtained from (6).

Following this, we aim for a new inertial method in which B is evaluated at a point y_n that depends on the two previous iterates; that is, the next iterate x_{n+1} is the solution x of
y_n = x_n + θ (x_n − x_{n−1}),
0 ∈ λ_n T(x) + x − x_n + λ_n B(y_n), (10)
where θ and λ_n are chosen according to an update rule that will be defined and explained in Section 3.
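Although (10) is implicit in x, when T = ∂g it is solved exactly by a single proximal step, x_{n+1} = prox_{λ_n g}(x_n − λ_n B(y_n)); this is the explicit structure exploited later. The sketch below illustrates one such pass with hypothetical choices of B, g, θ and λ (the paper's actual rule for θ and λ_n is given only in Section 3).

import numpy as np

def prox_l1(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

B = lambda x: x - np.array([2.0, 0.0])    # a simple Lipschitz monotone (affine) mapping
theta, lam = 0.2, 0.5                     # illustrative parameters, not the paper's update rule

x_prev, x = np.zeros(2), np.ones(2)
for _ in range(200):
    y = x + theta * (x - x_prev)          # inertial point built from the two previous iterates
    x_prev, x = x, prox_l1(x - lam * B(y), lam)  # solves 0 in lam*dg(x) + x - x_n + lam*B(y_n)
print(x)                                  # approximately [1, 0] for this instance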

Thanks to Prof. Hedy Attouch's wise advice, we studied [20, Algorithm 1] and its connection to (10). First, if one chooses a_n = 0 in [20, Algorithm 1], then (10) is obtained as a special case; however, in [20] the inclusion problem involves the gradient of a convex function, which is cocoercive, whereas we assume only monotonicity, a weaker assumption. Moreover, we consider different choices of the parameters θ and λ_n.

Next we recall three recent inertial-type methods that are relevant to our proposed scheme. We start with Malitsky's projected reflected (proximal) gradient method [22], which is designed for solving the VI (1); its iterative step with a constant step size reads
x_{n+1} = P_C(x_n − λ A(2x_n − x_{n−1})),
where x_0, x_1 ∈ H are arbitrary and λ ∈ (0, (√2 − 1)/L), with L the Lipschitz constant of A.
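A sketch of the projected reflected gradient step on a toy instance of (1), assuming C = [0, 1]^d and an affine monotone mapping A(x) = Mx + q with made-up data; the constant step size respects λ < (√2 − 1)/L.

import numpy as np

rng = np.random.default_rng(1)
d = 5
S = rng.standard_normal((d, d))
M = S - S.T + np.eye(d)                  # monotone: the symmetric part equals the identity
q = rng.standard_normal(d)
A = lambda x: M @ x + q
P_C = lambda v: np.clip(v, 0.0, 1.0)     # projection onto C = [0, 1]^d

L = np.linalg.norm(M, 2)                 # Lipschitz constant of A
lam = 0.9 * (np.sqrt(2.0) - 1.0) / L     # step size in (0, (sqrt(2) - 1)/L)

x_prev, x = np.zeros(d), np.full(d, 0.5)
for _ in range(3000):
    x_prev, x = x, P_C(x - lam * A(2 * x - x_prev))   # reflected point 2*x_n - x_{n-1}
print(x)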

Another related method is the so-called Mann-Tseng method introduced in [14]:
λ_n = min{ μ ‖x_{n−1} − y_{n−1}‖ / ‖A x_{n−1} − A y_{n−1}‖, λ_{n−1} },
y_n = prox_{λ_n g}(x_n − λ_n A x_n),
z_n = y_n − λ_n (A y_n − A x_n),
x_{n+1} = (1 − α_n − β_n) x_n + β_n z_n,
where λ_0 > 0, μ ∈ (0, 1) and x_0 ∈ H is chosen arbitrarily.
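A hedged sketch of the Mann-Tseng iteration on a toy instance with g = ‖·‖₁; the control sequences α_n, β_n, the parameter μ and the data are illustrative assumptions, not choices prescribed in [14].

import numpy as np

rng = np.random.default_rng(2)
d = 5
S = rng.standard_normal((d, d))
Mat = S - S.T + np.eye(d)
q = rng.standard_normal(d)
A = lambda x: Mat @ x + q
prox_g = lambda v, lam: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)   # prox of lam*||.||_1

mu, lam = 0.5, 1.0
x = np.ones(d)
x_old, y_old = x.copy(), prox_g(x - lam * A(x), lam)
for n in range(1, 3000):
    den = np.linalg.norm(A(x_old) - A(y_old))
    if den > 0:                                           # adaptive step-size rule
        lam = min(mu * np.linalg.norm(x_old - y_old) / den, lam)
    y = prox_g(x - lam * A(x), lam)
    z = y - lam * (A(y) - A(x))                           # Tseng correction step
    alpha, beta = 1.0 / (n + 1), 0.5                      # illustrative Mann parameters
    x_old, y_old = x, y
    x = (1 - alpha - beta) * x + beta * z
print(x)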

The last method we look at is the Golden Ratio Algorithm (GRAAL) presented in [23]:
λ_n = min{ (10/9) λ_{n−1}, (9/(16 λ_{n−2})) ‖x_n − x_{n−1}‖² / ‖A x_n − A x_{n−1}‖², λ̄ },
y_n = (x_n + 2 y_{n−1}) / 3,
x_{n+1} = prox_{λ_n g}(y_n − λ_n A x_n),
where x_0, x_1 ∈ H, y_0 = x_1, λ_0 = λ_{−1} > 0 and λ̄ > 0.
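For illustration, the next sketch runs a fixed-step variant of the golden ratio iteration (the adaptive step-size rule displayed above is omitted), assuming φ = (1 + √5)/2 and a constant step size λ = φ/(2L); the problem data are again a toy affine monotone instance.

import numpy as np

rng = np.random.default_rng(3)
d = 5
S = rng.standard_normal((d, d))
Mat = S - S.T + np.eye(d)
q = rng.standard_normal(d)
A = lambda x: Mat @ x + q
prox_g = lambda v, lam: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)   # g = ||.||_1

phi = (1.0 + np.sqrt(5.0)) / 2.0          # golden ratio
L = np.linalg.norm(Mat, 2)
lam = phi / (2.0 * L)

x = np.ones(d)
y = x.copy()                               # y_0
for _ in range(3000):
    y = ((phi - 1.0) * x + y) / phi        # y_n = ((phi - 1) x_n + y_{n-1}) / phi
    x = prox_g(y - lam * A(x), lam)        # x_{n+1} = prox_{lam*g}(y_n - lam*A x_n)
print(x)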

Thus, based on the above, we wish to study monotone variational inequalities of the form (3) and propose a new inertial, self-adaptive proximal point algorithm in real Hilbert spaces. Our method requires only one evaluation of the proximal operator of g and one evaluation of A per iteration, and the new adaptive step-size rule avoids the need for prior knowledge of the Lipschitz constant of A.

Our results generalize some recent theoretical results in the literature as well as demonstrate good practical behavior. We present a full convergence analysis, including an R-linear convergence rate under strong monotonicity. Extensive numerical experiments on synthetic and real-world problems confirm the efficiency and applicability of the proposed method.

The paper is organized as follows. We first recall some basic definitions and results in Section 2. Our proposed method is presented and analyzed in Section 3. Section 4 establishes the weak convergence and the R-linear convergence of the proposed method. In Section 5 we present numerical experiments which demonstrate the performance of the algorithm and provide a preliminary computational comparison with some related algorithms.

Preliminaries

Let H be a real Hilbert space and C a nonempty, closed and convex subset of H. The weak convergence of {x_n} to x is denoted by x_n ⇀ x as n → ∞, while the strong convergence of {x_n} to x is written as x_n → x as n → ∞. For each x, y ∈ H and α ∈ ℝ, we have
‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩,
‖αx + (1 − α)y‖² = α‖x‖² + (1 − α)‖y‖² − α(1 − α)‖x − y‖².
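These two relations are used repeatedly in the analysis; the following quick numerical check with arbitrary vectors (illustrative only) verifies them.

import numpy as np

rng = np.random.default_rng(4)
x, y = rng.standard_normal(6), rng.standard_normal(6)
alpha = 0.3

# ||x + y||^2 <= ||x||^2 + 2 <y, x + y>
print(np.linalg.norm(x + y)**2 <= np.linalg.norm(x)**2 + 2 * np.dot(y, x + y) + 1e-12)

# ||a x + (1 - a) y||^2 = a ||x||^2 + (1 - a) ||y||^2 - a (1 - a) ||x - y||^2
lhs = np.linalg.norm(alpha * x + (1 - alpha) * y)**2
rhs = (alpha * np.linalg.norm(x)**2 + (1 - alpha) * np.linalg.norm(y)**2
       - alpha * (1 - alpha) * np.linalg.norm(x - y)**2)
print(np.isclose(lhs, rhs))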

We recall some concepts of linear convergence in the following definition.

Definition 1

A sequence {x_n} in H is said to converge R-linearly to x with rate ρ ∈ [0, 1) if there is a constant c > 0 such that ‖x_n − x‖ ≤ c ρ^n for all n ∈ ℕ.
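For instance, any sequence satisfying ‖x_n − x‖ ≤ 3 · (1/2)^n for all n converges R-linearly to x with rate ρ = 1/2 and constant c = 3; only the geometric envelope of the errors matters, not their monotonicity.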

The algorithm

In this section we introduce a fully explicit algorithm in the sense of Malitsky [23], i.e., the algorithm requires neither knowledge of the Lipschitz constant L nor a line-search procedure. For simplicity, we adopt the convention 0/0 = +∞.

Algorithm 1

Before we establish the convergence analysis of Algorithm 1, we discuss some of its important properties and special cases. Observe that from (17) we can derive a positive lower bound on the step sizes:
λ_n ≥ min{ λ_1, (2 − √2)μ/L } > 0 for all n ≥ 1.
Indeed, it is clear

Weak convergence

We start the convergence analysis by proving some key inequalities. For all n ≥ 1, let
α_n := max{ (4 + 2√2) μ λ_n/λ_{n+1}, 4μ λ_n/λ_{n+1} + 2√2 μ λ_{n+1}/λ_{n+2} },
a_n := ‖x_n − x*‖² + 2√2 μ (λ_n/λ_{n+1}) ‖x_n − y_{n−1}‖² + 2λ_{n−1} θ φ(x*, x_{n−1}),
where x* is any solution of (3) and
φ(x, y) := ⟨Ax, y − x⟩ + g(y) − g(x) for all x, y ∈ H,
b_n := (1 − λ_n/(2θ λ_{n−1}) − α_n/2) ‖x_{n+1} − x_n‖².
Since lim_{n→∞} λ_n = λ, it is clear that
lim_{n→∞} α_n = (4 + 2√2)μ > 0, and
lim_{n→∞} (1 − λ_n/(2θ λ_{n−1}) − α_n/2) = 1 − 1/(2θ) − (2 + √2)μ > 0.

The following proposition will play the key role in our convergence analysis.

Proposition 1

Let the mapping A: H → H be monotone

Numerical experiments

In this section, we consider several numerical examples for testing and comparing the performance of our scheme. All experiments are performed under Windows 10 and MATLAB R2019b, running on a Lenovo laptop with an Intel Core i7 CPU at 1.8 GHz and 16 GB of memory.

Conclusions

In this paper, we presented a new proximal method for solving variational inequalities in real Hilbert spaces. The convergence of the proposed algorithm is obtained under monotonicity and Lipschitz continuity of the mapping A associated with the VI. The proposed method uses an inertial effect and does not require knowledge of the Lipschitz constant. These two properties emphasize its applicability and advantages over several existing results in the literature. Extensive numerical experiments illustrate the advantages and the applicability of our scheme.

Acknowledgement

The authors thank Prof. Hedy Attouch, Juan Peypouquet and Yura Malitsky for independent, fruitful discussions and insights provided while preparing this work. They are grateful to three anonymous referees for their careful reading and comments, which helped to improve the presentation of the paper. This research is funded by the National Economics University, Hanoi, Vietnam.

References (35)

  • P.C. Hansen et al., AIR Tools — a MATLAB package of iterative reconstruction methods, J. Comput. Appl. Math. (2012)
  • A. Moudafi et al., Convergence of a splitting inertial proximal method for monotone operators, J. Comput. Appl. Math. (2003)
  • F. Alvarez et al., An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping, Set-Valued Anal. (2001)
  • H. Attouch et al., Convergence rates of inertial forward-backward algorithms, SIAM J. Optim. (2018)
  • H. Attouch et al., The rate of convergence of Nesterov's accelerated forward-backward method is actually faster than 1/k², SIAM J. Optim. (2016)
  • J.P. Aubin et al., Applied Nonlinear Analysis (1984)
  • H.H. Bauschke et al., Convex Analysis and Monotone Operator Theory in Hilbert Spaces (2011)
  • A. Beck et al., A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci. (2009)
  • R.E. Bruck et al., Nonexpansive projections and resolvents of accretive operators in Banach spaces, Houst. J. Math. (1977)
  • E.J. Candès et al., Exact matrix completion via convex optimization, Found. Comput. Math. (2009)
  • A. Cegielski, Iterative Methods for Fixed Point Problems in Hilbert Spaces (2012)
  • Y. Censor et al., On the effectiveness of projection methods for convex feasibility problems with linear inequality constraints, Comput. Optim. Appl. (2012)
  • Y. Censor et al., Parallel Optimization: Theory, Algorithms, and Applications (1997)
  • F. Facchinei et al., Finite-Dimensional Variational Inequalities and Complementarity Problems, Springer Series in Operations Research, vols. I and II (2003)
  • G. Fichera, Sul problema elastostatico di Signorini con ambigue condizioni al contorno [On the elastostatic Signorini problem with ambiguous boundary conditions], Atti Accad. Naz. Lincei, VIII. Ser., Rend., Cl. Sci. Fis. Mat. Nat. (1963)
  • A. Gibali et al., Tseng type methods for solving inclusion problems and its applications, Calcolo (2018)
  • A. Gibali et al., DC-programming versus ℓ0-superiorization for discrete tomography, An. Ştiinţ. Univ. ‘Ovidius’ Constanţa, Ser. Mat. (2018)