Backward error analysis is an important part of perturbation theory and is particularly useful for assessing the reliability of numerical methods. We focus on the backward error for nonlinear eigenvalue problems. In this talk, the matrix-valued function is given as a linear combination of scalar functions multiplying matrix coefficients, and the perturbations act on these coefficients. We provide theoretical results about the backward error of a set of approximate eigenpairs: indeed, small backward errors for the individual eigenpairs do not imply a small backward error for the set as a whole. We provide inexpensive upper bounds, as well as ways to compute the backward error accurately, either by direct computation or through Riemannian optimization. We also discuss how the backward error can be determined when the matrix coefficients of the matrix-valued function have particular structures (such as symmetry, sparsity, or low rank) and the perturbations are required to preserve them. For special cases (such as symmetric coefficients), we also give explicit and inexpensive formulas for the perturbed matrix coefficients. This is joint work with Leonardo Robol (University of Pisa).
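As a point of reference for the setting above, the classical normwise backward error of a single approximate eigenpair of F(z) = Σᵢ fᵢ(z) Aᵢ can be sketched as follows. This is a minimal illustration of the unstructured, single-eigenpair case only; the function names and the Frobenius-norm weighting of the coefficient perturbations are our assumptions, not notation from the talk.

```python
import numpy as np

def backward_error(funs, coeffs, lam, x):
    """Normwise backward error of an approximate eigenpair (lam, x) of
    F(z) = sum_i funs[i](z) * coeffs[i]: the residual ||F(lam) x||_2
    scaled by the weighted sizes of the coefficients (unstructured
    case, Frobenius-norm weights)."""
    x = x / np.linalg.norm(x)                       # work with a unit vector
    Flam = sum(f(lam) * A for f, A in zip(funs, coeffs))
    residual = np.linalg.norm(Flam @ x)             # ||F(lam) x||_2
    weight = sum(abs(f(lam)) * np.linalg.norm(A, 'fro')
                 for f, A in zip(funs, coeffs))
    return residual / weight

# Sanity check on the linear case F(z) = z*I - A: an exact eigenpair
# computed by np.linalg.eig has backward error near machine precision.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
funs = [lambda z: z, lambda z: -1.0]
coeffs = [np.eye(5), A]
vals, vecs = np.linalg.eig(A)
print(backward_error(funs, coeffs, vals[0], vecs[:, 0]))
```

The quantity computed here is per-eigenpair; as the talk points out, small values of it for each eigenpair separately do not guarantee a small backward error for the whole set.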