An inexact framework for high-order adaptive regularization methods is presented, in which the pth-order tensor may be approximated using lower-order derivatives. Between each recomputation of the pth-order derivative approximation, the tensor can either be updated via a high-order secant equation, as proposed in (Welzel 2022), or kept constant in a lazy manner. When the pth-order tensor approximation is refreshed after m steps, either an exact evaluation of the tensor or a finite-difference approximation with an explicit discretization stepsize can be used. For all of the newly introduced adaptive regularization variants, we recover the standard complexity bound for reaching a second-order stationary point. We also discuss the number of oracle calls required by each variant. When p = 2, we obtain a second-order method that uses quasi-Newton approximations and achieves the optimal iteration-complexity bound.
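As a concrete illustration of the p = 2 case, the lazy refresh combined with a finite-difference derivative approximation can be sketched as follows. This is a minimal sketch under simplifying assumptions: the function names, the fixed regularization parameter `sigma`, and the plain regularized Newton step are illustrative choices, not the paper's adaptive algorithm.

```python
# Sketch of the "lazy" refresh strategy for p = 2: the Hessian
# approximation is recomputed by finite differences of the gradient
# every m iterations and kept constant in between. The quadratic
# test problem and fixed sigma are illustrative assumptions.
import numpy as np

def fd_hessian(grad, x, h=1e-5):
    """Forward-difference Hessian approximation built from gradient
    calls, with an explicit discretization stepsize h."""
    n = x.size
    g0 = grad(x)
    H = np.empty((n, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        H[:, i] = (grad(x + e) - g0) / h
    return 0.5 * (H + H.T)  # symmetrize

def lazy_regularized_newton(grad, x0, m=5, sigma=1.0, iters=50):
    """Regularized Newton iteration with a lazily refreshed Hessian:
    the finite-difference model is rebuilt only every m steps."""
    x = x0.copy()
    H = None
    for k in range(iters):
        if k % m == 0:            # refresh the tensor approximation
            H = fd_hessian(grad, x)
        g = grad(x)
        # regularized step: solve (H + sigma I) s = -g
        s = np.linalg.solve(H + sigma * np.eye(x.size), -g)
        x = x + s
    return x
```

For a strongly convex quadratic, each refresh costs n extra gradient evaluations, while the intermediate lazy steps reuse the stored approximation, which is the oracle-call trade-off the framework quantifies.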