We introduce a novel approach to global optimization via continuous-time dynamic programming and Hamilton-Jacobi-Bellman (HJB) PDEs. For non-convex, non-smooth objective functions, we reformulate global optimization as an infinite-horizon optimal asymptotic stabilization problem. The solution to the associated HJB PDE is a value function that corresponds to a (quasi)convexification of the original objective. Using the gradient of the value function, we obtain a feedback law that drives any initial guess toward the global optimizer without requiring derivatives of the original objective. We then demonstrate that this HJB control law can be integrated into other global optimization frameworks to improve their performance and robustness.
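As a minimal sketch of one such reformulation (our notation; the paper's exact dynamics and running cost may differ): steering the state through $\dot{x}(t) = u(t)$ with $\|u\| \le 1$ and penalizing the suboptimality gap $\ell(x) = f(x) - \inf f$ over an infinite horizon gives the value function $V(x) = \inf_{u(\cdot)} \int_0^\infty \ell(x(t))\,dt$, whose stationary HJB equation takes the eikonal form
\[
0 \;=\; \min_{\|u\| \le 1} \bigl\{\, \ell(x) + \nabla V(x) \cdot u \,\bigr\}
\;=\; \ell(x) - \|\nabla V(x)\|.
\]
Since $\ell$ vanishes only at the global minimizer, the optimal feedback $u^{*}(x) = -\nabla V(x)/\|\nabla V(x)\|$ moves any initial point downhill on the (quasi)convexified landscape $V$ rather than on $f$ itself, which is why no derivatives of the original objective are needed.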