An explicit algorithm for solving monotone variational inequalities
Introduction
Let C be a nonempty, closed and convex subset of a real Hilbert space H and let A : H → H be a given mapping. The classical variational inequality (VI) problem [13] consists of finding a point x* ∈ C such that
⟨A(x*), x − x*⟩ ≥ 0 for all x ∈ C. (1)
Observe that by including the indicator function δ_C of the set C, the VI (1) can be written as the following unconstrained problem: find x* ∈ H such that
⟨A(x*), x − x*⟩ + δ_C(x) − δ_C(x*) ≥ 0 for all x ∈ H. (2)
In case that δ_C is replaced by a general convex function g : H → ℝ ∪ {+∞}, the so-called generalized variational inequality problem is obtained, that is, find x* ∈ H such that
⟨A(x*), x − x*⟩ + g(x) − g(x*) ≥ 0 for all x ∈ H. (3)
From here on we denote the solution set of (3) by Ω and assume it is nonempty.
Variational inequalities arise naturally in many theoretical and applied fields, such as generalizations of boundary value problems, mechanics, physics, economics and control theory; for further examples see, e.g., [4], [12], [18], [19], [27], [32], [33].
There exist various equivalent reformulations of (3); in particular, it can be remodeled as the following inclusion problem: find x* ∈ H such that
0 ∈ A(x*) + ∂g(x*). (4)
Regarding methods for solving (4) and more general nonlinear problems, one of the most popular methods is the proximal point method, introduced by Martinet [24] and extended by Rockafellar [29] and many others; see for example [7], [28]. Recall that for a given convex function f : H → ℝ ∪ {+∞} and λ > 0, the proximal operator of f is defined for x ∈ H as
prox_{λf}(x) := argmin_{u ∈ H} { f(u) + (1/(2λ))‖u − x‖² }.
Another related operator is the resolvent of a maximal monotone operator T : H → 2^H, defined as J_λ^T := (I + λT)^{−1}, where I denotes the identity mapping on H.
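As a concrete illustration of the proximal operator (a sketch not taken from the paper; the choice f(x) = ‖x‖₁ is an illustrative assumption), prox_{λf} has the well-known closed form of componentwise soft-thresholding:

```python
import numpy as np

def prox_l1(x, lam):
    """Soft-thresholding: the proximal operator of f(x) = ||x||_1,
    i.e. argmin_u { ||u||_1 + ||u - x||^2 / (2*lam) }."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# each component is shrunk toward zero by lam; small components vanish
print(prox_l1(np.array([3.0, -0.5, 1.5]), 1.0))
```

The closed form makes each proximal evaluation cheap, which is exactly why such operators serve as inner steps in splitting schemes.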
Observe that the proximal and resolvent operators are linked as follows: if f : H → ℝ ∪ {+∞} is a proper, convex and lower semicontinuous function and ∂f is its subdifferential, then prox_{λf} = J_λ^{∂f}.
Proximal operators are commonly used as an inner step in some algorithms, for example in splitting methods. These methods decompose a given operator into a sum of maximal monotone operators, so that an inclusion problem of the general form (4) is obtained. The resolvent/proximal evaluations of the summands are then easier than evaluations involving the original operator.
With respect to (3) we recall the forward-backward splitting method introduced in [21]:
x_{n+1} = prox_{λg}(x_n − λA(x_n)),
where x_0 ∈ H is chosen arbitrarily and λ > 0. The operator I − λA is called the forward operator and prox_{λg} is called the backward operator.
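To make the scheme concrete, here is a minimal sketch of the forward-backward iteration (the operator A(x) = x − b, the data b and the step size are illustrative assumptions, not the paper's setting). With A the gradient of ½‖x − b‖² and g = ‖·‖₁, the iteration is the classical ISTA step:

```python
import numpy as np

def soft_threshold(v, lam):
    # prox of lam * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def forward_backward(A, prox_g, x0, lam, n_iter=200):
    # x_{n+1} = prox_{lam*g}( x_n - lam * A(x_n) )
    x = x0
    for _ in range(n_iter):
        x = prox_g(x - lam * A(x), lam)
    return x

# illustrative problem: minimize 0.5*||x - b||^2 + ||x||_1
b = np.array([2.0, 0.1])
x_star = forward_backward(lambda x: x - b, soft_threshold, np.zeros(2), lam=0.5)
```

For this toy problem the limit is the soft-thresholded data, here (1, 0): the forward step moves along −A, the backward step applies the proximal map of g.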
The proximal point method is an iterative procedure in which the update of the next iterate x_{n+1} depends only on the previous one x_n. Alvarez and Attouch [1] studied the “heavy ball with friction” dynamical system and introduced a new inertial rule that uses the two previous iterates in order to accelerate the proximal point method and other related schemes.
For finding zeros of a maximal monotone set-valued mapping T : H → 2^H, Alvarez and Attouch [1] studied the following second-order differential system:
0 ∈ ẍ(t) + γẋ(t) + T(x(t)), γ > 0. (6)
By using the following discretization,
(x_{n+1} − 2x_n + x_{n−1})/h² + γ(x_n − x_{n−1})/h + T(x_{n+1}) ∋ 0,
(6) translates to the so-called inertial proximal point method
x_{n+1} = J_λ^T(x_n + θ(x_n − x_{n−1})),
with θ ∈ [0, 1) and λ > 0.
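A minimal sketch of this inertial proximal point step (the operator T and all parameter values are illustrative assumptions): for T(x) = x, the resolvent is J_λ^T(y) = y/(1 + λ) and the iterates approach the unique zero x* = 0.

```python
import numpy as np

def inertial_ppa(resolvent, x0, theta=0.3, lam=1.0, n_iter=100):
    # x_{n+1} = J_lam^T ( x_n + theta * (x_n - x_{n-1}) )
    x_prev, x = x0, x0
    for _ in range(n_iter):
        y = x + theta * (x - x_prev)   # inertial extrapolation
        x_prev, x = x, resolvent(y, lam)
    return x

# T(x) = x has resolvent J_lam(y) = y / (1 + lam); unique zero at 0
x = inertial_ppa(lambda y, lam: y / (1.0 + lam), np.array([5.0]))
```

Note that the extrapolation point y uses both x_n and x_{n−1}, which is the defining feature of the inertial scheme.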
Moudafi and Oliny [26] extended the above by adding a single-valued operator B to T. Following the above, they proposed a new splitting procedure in which the next iterate is the solution x of
0 ∈ λ(T(x) + B(x_n)) + x − x_n − θ(x_n − x_{n−1}),
that is, x_{n+1} = J_λ^T(x_n + θ(x_n − x_{n−1}) − λB(x_n)). Clearly, when B ≡ 0, the motivation is (6).
Following this, we aim for a new inertial method in which B is evaluated at a point that depends on the two previous iterates; that is, the next iterate is based on finding the solution x of (10), where θ and λ are chosen according to an update rule that will be defined and explained in Section 3.
Thanks to Prof. Hedy Attouch's wise advice, we studied [20, Algorithm 1] and its connection to (10). First, for a particular choice of parameters in [20, Algorithm 1], (10) is obtained as a special case; however, in [20] the inclusion problem involves the gradient of a convex function, which is cocoercive, whereas we assume only monotonicity, a weaker assumption. Moreover, we consider other choices of the parameters θ and λ.
Next we also recall three recent inertial-type methods that are relevant to our proposed scheme. We start with Malitsky's reflected projected (proximal) gradient method [22], designed for solving the VI (1); its iterative step with constant step size reads
x_{n+1} = P_C(x_n − λA(2x_n − x_{n−1})),
where x_0, x_1 ∈ H are arbitrary and λ ∈ (0, (√2 − 1)/L), with L the Lipschitz constant of A.
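A sketch of the reflected step under the stated step-size bound (the choice A(x) = x with L = 1 and an unconstrained problem, so the projection/proximal map reduces to the identity, is an illustrative assumption):

```python
import numpy as np

def reflected_gradient(A, prox_g, x0, lam, n_iter=200):
    # x_{n+1} = prox_{lam*g}( x_n - lam * A(2 x_n - x_{n-1}) )
    x_prev, x = x0, x0
    for _ in range(n_iter):
        y = 2.0 * x - x_prev                      # reflection of x_{n-1} through x_n
        x_prev, x = x, prox_g(x - lam * A(y), lam)
    return x

# A(x) = x is monotone and 1-Lipschitz; lam = 0.4 < (sqrt(2) - 1)/L
x = reflected_gradient(lambda v: v, lambda v, lam: v,
                       np.array([3.0, -2.0]), lam=0.4)
```

The single A-evaluation per iteration at the reflected point 2x_n − x_{n−1} is what distinguishes this scheme from extragradient-type methods, which evaluate A twice.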
Another related method is the so-called Mann–Tseng method introduced in [14], where the starting point x_0 ∈ H is chosen arbitrarily.
The last method we look at is the Golden Ratio Algorithm (GRAAL) presented in [23]:
x̄_n = ((φ − 1)x_n + x̄_{n−1})/φ,
x_{n+1} = prox_{λg}(x̄_n − λA(x_n)),
where φ = (1 + √5)/2 is the golden ratio, λ ≤ φ/(2L), and x_1, x̄_0 ∈ H are arbitrary.
Thus, based on the above, we wish to study monotone variational inequalities of the form (3) and propose a new inertial, self-adaptive proximal point algorithm in real Hilbert spaces. Our method requires only one evaluation of the proximal operator of g and one evaluation of A per iteration, and the new adaptive step-size rule avoids prior knowledge of the Lipschitz constant of A.
Our results generalize some recent theoretical results in the literature and demonstrate good practical behavior. We present a full convergence analysis, including an R-linear convergence rate under strong monotonicity. Extensive numerical experiments on synthetic and real-world problems confirm the efficiency and applicability of the proposed method.
The paper is organized as follows. We first recall some basic definitions and results in Section 2. Our proposed method is presented and analyzed in Section 3. Section 4 presents the weak convergence and the linear convergence of the proposed method. In Section 5 we present some numerical experiments which demonstrate the performance of the algorithm and provide a preliminary computational comparison with some related algorithms.
Preliminaries
Let H be a real Hilbert space and C a nonempty, closed and convex subset of H. The weak convergence of (x_n) to x is denoted by x_n ⇀ x as n → ∞, while the strong convergence of (x_n) to x is written as x_n → x as n → ∞. For each x, y ∈ H and α ∈ [0, 1], we have
‖αx + (1 − α)y‖² = α‖x‖² + (1 − α)‖y‖² − α(1 − α)‖x − y‖².
We recall some concepts of linear convergence in the following definition.
Definition 1 A sequence (x_n) in H is said to converge R-linearly to x* with rate ρ ∈ [0, 1) if there is a constant c > 0 such that ‖x_n − x*‖ ≤ cρⁿ for all n ∈ ℕ.
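As a quick numerical illustration of this definition (a sketch with assumed data, not from the paper), the iteration x_{n+1} = ρx_n satisfies ‖x_n − 0‖ = |x_0|ρⁿ, so it converges R-linearly with rate ρ and constant c = |x_0|:

```python
import numpy as np

rho, x = 0.5, 8.0
errors = []
for n in range(10):
    errors.append(abs(x))   # ||x_n - x*|| with x* = 0
    x = rho * x

# R-linear bound of Definition 1 with c = |x_0| = 8
assert all(e <= 8.0 * rho**n + 1e-12 for n, e in enumerate(errors))
```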
The algorithm
In this section we introduce a fully explicit algorithm in the sense of Malitsky [23]; that is, the algorithm requires neither knowledge of the Lipschitz constant L nor a linesearch procedure. For simplicity, we adopt the following convention.
Algorithm 1
Before we establish the convergence analysis of Algorithm 1, we wish to discuss some of its important properties and special cases. Observe that from (17) we can derive a positive lower bound for the step sizes. Indeed, it is clear
Weak convergence
We start the convergence analysis by proving some key inequalities involving the iterates and an arbitrary solution x* of (3).
The following proposition will play the key role in our convergence analysis.
Proposition 1 Let the mapping A be monotone
Numerical experiments
In this section, we consider several numerical examples for testing and comparing the performance of our scheme. All experiments are performed under Windows 10 and MATLAB R2019b, running on a Lenovo laptop with an Intel Core i7 CPU at 1.8 GHz and 16 GB of memory.
Conclusions
In this paper, we present a new proximal method for solving variational inequalities in real Hilbert spaces. Convergence of the proposed algorithm is obtained under monotonicity and Lipschitz continuity of the mapping A associated with the VI. The proposed method uses an inertial effect and does not require knowledge of the Lipschitz constant. These two properties emphasize its applicability and advantages over several existing results in the literature. Extensive numerical experiments illustrate the
Acknowledgement
The authors thank Prof. Hedy Attouch, Juan Peypouquet and Yura Malitsky for fruitful discussions and insights provided while preparing this work. They are grateful to three anonymous referees for their careful reading and comments, which helped improve the presentation of the paper. This research is funded by National Economics University, Hanoi, Vietnam.
References (35)
- P.C. Hansen, M. Saxild-Hansen, AIR Tools — a MATLAB package of algebraic iterative reconstruction methods, J. Comput. Appl. Math. (2012)
- A. Moudafi, M. Oliny, Convergence of a splitting inertial proximal method for monotone operators, J. Comput. Appl. Math. (2003)
- F. Alvarez, H. Attouch, An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping, Set-Valued Anal. (2001)
- H. Attouch, A. Cabot, Convergence rates of inertial forward-backward algorithms, SIAM J. Optim. (2018)
- H. Attouch, J. Peypouquet, The rate of convergence of Nesterov's accelerated forward-backward method is actually faster than 1/k², SIAM J. Optim. (2016)
- J.-P. Aubin, I. Ekeland, Applied Nonlinear Analysis (1984)
- H.H. Bauschke, P.L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces (2011)
- A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci. (2009)
- R.E. Bruck, S. Reich, Nonexpansive projections and resolvents of accretive operators in Banach spaces, Houst. J. Math. (1977)
- E.J. Candès, B. Recht, Exact matrix completion via convex optimization, Found. Comput. Math. (2009)
- A. Cegielski, Iterative Methods for Fixed Point Problems in Hilbert Spaces
- Y. Censor et al., On the effectiveness of projection methods for convex feasibility problems with linear inequality constraints, Comput. Optim. Appl.
- Y. Censor, S.A. Zenios, Parallel Optimization: Theory, Algorithms, and Applications
- F. Facchinei, J.-S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, Springer Series in Operations Research, vols. I and II
- G. Fichera, Sul problema elastostatico di Signorini con ambigue condizioni al contorno, Atti Accad. Naz. Lincei, VIII. Ser., Rend., Cl. Sci. Fis. Mat. Nat.
- A. Gibali, D.V. Thong, Tseng type methods for solving inclusion problems and its applications, Calcolo
- DC-programming versus ℓ0-superiorization for discrete tomography, An. Ştiinţ. Univ. ‘Ovidius’ Constanţa, Ser. Mat.