How do I compute a derivative using NumPy?




How do I calculate the derivative of a function, for example



y = x**2 + 1



using numpy?





Let's say I want the value of the derivative at x = 5...





You need to use SymPy: sympy.org/en/index.html. NumPy is a numeric computation library for Python.
– prrao
Mar 26 '12 at 16:55





Alternatively, do you want a method for estimating the numerical value of the derivative? For this you can use a finite difference method, but bear in mind they tend to be horribly noisy.
– Henry Gomersall
Mar 26 '12 at 17:11




7 Answers



You have four options:



Finite differences require no external tools but are prone to numerical error and, if you're in a multivariate situation, can take a while. (A minimal sketch follows this list.)



Symbolic differentiation is ideal if your problem is simple enough. Symbolic methods are getting quite robust these days. SymPy is an excellent project for this that integrates well with NumPy. Look at the autowrap or lambdify functions or check out Jensen's blogpost about a similar question.



Automatic derivatives are very cool, aren't prone to numeric errors, but do require some additional libraries (google for this, there are a few good options). This is the most robust but also the most sophisticated/difficult to set up choice. If you're fine restricting yourself to numpy syntax then Theano might be a good choice.


Differentiation by hand is the remaining option: derive the expression yourself and code up the result. It needs no tooling at all, but it is tedious and error-prone for anything complicated.
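As a minimal sketch of the first option, a central finite difference takes only a few lines (the helper name, the test function and the step size below are illustrative choices, not something from this answer):

def central_difference(f, x0, h=1e-5):
    # Symmetric difference quotient: error is O(h**2) for smooth f
    return (f(x0 + h) - f(x0 - h)) / (2.0 * h)

print(central_difference(lambda x: x**2 + 1, 5.0))   # ~10.0, since d/dx (x**2 + 1) = 2x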



Here is an example using SymPy


In [1]: from sympy import *
In [2]: import numpy as np
In [3]: x = Symbol('x')
In [4]: y = x**2 + 1
In [5]: yprime = y.diff(x)
In [6]: yprime
Out[6]: 2⋅x

In [7]: f = lambdify(x, yprime, 'numpy')
In [8]: f(np.ones(5))
Out[8]: [ 2. 2. 2. 2. 2.]
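Since the question asks for the value at x = 5, one way to get it from the same session (a small follow-up sketch, not part of the original answer) is:

In [9]: yprime.subs(x, 5)
Out[9]: 10

In [10]: f(5.0)
Out[10]: 10.0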





Sorry if this seems stupid, but what is the difference between 3. symbolic differentiation and 4. by-hand differentiation?
– DrStrangeLove
Apr 12 '12 at 16:55





When I said "symbolic differentiation" I intended to imply that the process was handled by a computer. In principle 3 and 4 differ only by who does the work, the computer or the programmer. 3 is preferred over 4 due to consistency, scalability, and laziness. 4 is necessary if 3 fails to find a solution.
– MRocklin
Apr 13 '12 at 16:51





Thanks a lot!! But what is [ 2. 2. 2. 2. 2.] on the last line?
– DrStrangeLove
Apr 14 '12 at 2:18





In line 7 we made f, a function that computes the derivative of y wrt x. In 8 we apply this derivative function to a vector of all ones and get the vector of all twos. This is because, as stated in line 6, yprime = 2*x.
– MRocklin
Apr 14 '12 at 13:45



NumPy does not provide general functionality to compute derivatives. It can handle the simple special case of polynomials, however:


>>> import numpy
>>> p = numpy.poly1d([1, 0, 1])
>>> print p
   2
1 x + 1
>>> q = p.deriv()
>>> print q
2 x
>>> q(5)
10



If you want to compute the derivative numerically, you can get away with using central difference quotients for the vast majority of applications. For the derivative at a single point, the formula would be something like


x = 5.0
eps = numpy.sqrt(numpy.finfo(float).eps) * (1.0 + x)   # step scaled to the magnitude of x
print (p(x + eps) - p(x - eps)) / (2.0 * eps)           # central difference, prints roughly 10.0



If you have an array x of abscissae with a corresponding array y of function values, you can compute approximations of the derivative with



numpy.diff(y) / numpy.diff(x)
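For example, with sample arrays chosen just for illustration (not from the original answer):

>>> import numpy
>>> x = numpy.linspace(0.0, 10.0, 11)
>>> y = x**2 + 1
>>> numpy.diff(y) / numpy.diff(x)        # one estimate per interval, length len(x) - 1
array([  1.,   3.,   5.,   7.,   9.,  11.,  13.,  15.,  17.,  19.])
>>> (x[:-1] + x[1:]) / 2.0               # the estimates belong to the interval midpoints
array([ 0.5,  1.5,  2.5,  3.5,  4.5,  5.5,  6.5,  7.5,  8.5,  9.5])

For the quadratic from the question these midpoint estimates are exact, since a central difference is exact for polynomials of degree two.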





'Computing numerical derivatives for more general case is easy' -- I beg to differ, computing numerical derivatives for general cases is quite difficult. You just chose nicely behaved functions.
– High Performance Mark
Mar 26 '12 at 17:18





What does the 2 mean after >>> print p? (on the 2nd line)
– DrStrangeLove
Mar 26 '12 at 17:23





@DrStrangeLove: That's the exponent. It's meant to simulate mathematical notation.
– Sven Marnach
Mar 26 '12 at 17:26





@SvenMarnach Is it the maximum exponent, or what? Why does it think the exponent is 2? We only input coefficients...
– DrStrangeLove
Mar 26 '12 at 17:29





@DrStrangeLove: The output is supposed to be read as 1 * x**2 + 1. They put the 2 in the line above because it's an exponent. Look at it from a distance.
– Sven Marnach
Mar 26 '12 at 17:31





The most straightforward way I can think of is to use numpy's gradient function:


import numpy

x = numpy.linspace(0, 10, 1000)
dx = x[1] - x[0]
y = x**2 + 1
dydx = numpy.gradient(y, dx)



This way, dydx will be computed using central differences (with one-sided differences at the two endpoints) and will have the same length as y, unlike numpy.diff, which uses forward differences and returns a vector of length n-1.
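If the abscissae are not evenly spaced, NumPy 1.13 and later also accept the coordinate array itself in place of the scalar spacing; a small sketch (not part of the original answer):

import numpy

x = numpy.sort(numpy.random.uniform(0, 10, 1000))   # unevenly spaced sample points
y = x**2 + 1
dydx = numpy.gradient(y, x)                          # pass the coordinates instead of a scalar dx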





What if dx isn't constant?
– weberc2
Jul 1 '15 at 21:22





@weberc2, in that case you should divide one vector by another, but treat the edges separately with forward and backward derivatives manually.
– Sparkler
Jul 2 '15 at 22:06





Or you could interpolate y with a constant dx, then calculate the gradient.
– IceArdor
Nov 16 '16 at 3:43





@Sparkler Thanks for your suggestion. If I may ask 2 small questions, (i) why do we pass dx to numpy.gradient instead of x? (ii) Can we also do the last line of yours as follows: dydx = numpy.gradient(y, numpy.gradient(x))?
– user929304
Jan 22 at 11:33





I'll throw another method on the pile...



scipy.interpolate's many interpolating splines are capable of providing derivatives. So, using a linear spline (k=1), the derivative of the spline (using the derivative() method) should be equivalent to a forward difference. I'm not entirely sure, but I believe using a cubic spline derivative would be similar to a centered difference derivative since it uses values from before and after to construct the cubic spline.




import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline

# Sample data, using the function from the question
x = np.linspace(0, 10, 100)
y = x**2 + 1

# Get a function that evaluates the linear spline at any x
f = InterpolatedUnivariateSpline(x, y, k=1)

# Get a function that evaluates the derivative of the linear spline at any x
dfdx = f.derivative()

# Evaluate the derivative dydx at each x location...
dydx = dfdx(x)
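The cubic variant mentioned above is the same call with k=3 (a small follow-up sketch, not part of the original answer):

# Cubic spline: uses neighbouring points on both sides, closer in spirit to a centred difference
f_cubic = InterpolatedUnivariateSpline(x, y, k=3)
dydx_cubic = f_cubic.derivative()(x)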



Assuming you want to use numpy, you can numerically compute the derivative of a function at any point using the rigorous (limit) definition:




def d_fun(x):
    h = 1e-5  # in theory h is an infinitesimal
    return (fun(x+h) - fun(x)) / h



You can also use the Symmetric derivative for better results:


def d_fun(x):
    h = 1e-5
    return (fun(x+h) - fun(x-h)) / (2*h)



Using your example, the full code should look something like:


def fun(x):
    return x**2 + 1

def d_fun(x):
    h = 1e-5
    return (fun(x+h) - fun(x-h)) / (2*h)



Now, you can numerically find the derivative at x=5:




In [1]: d_fun(5)
Out[1]: 9.999999999621423



Depending on the level of precision you require, you can work it out yourself using the definition of the derivative as a difference quotient:


>>> (((5 + 0.1) ** 2 + 1) - ((5) ** 2 + 1)) / 0.1
10.09999999999998
>>> (((5 + 0.01) ** 2 + 1) - ((5) ** 2 + 1)) / 0.01
10.009999999999764
>>> (((5 + 0.0000000001) ** 2 + 1) - ((5) ** 2 + 1)) / 0.0000000001
10.00000082740371



We can't actually take the limit of the difference quotient, but it's kinda fun.
You have to watch out, though, because for a step that is too small floating-point cancellation wipes out the numerator entirely:


>>> (((5+0.0000000000000001)**2+1)-((5)**2+1))/0.0000000000000001
0.0



To calculate gradients, the machine learning community uses Autograd:



"Efficiently computes derivatives of numpy code."



To install:


pip install autograd



Here is an example:


import autograd.numpy as np
from autograd import grad

def fct(x):
    y = x**2 + 1
    return y

grad_fct = grad(fct)
print(grad_fct(1.0))
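This prints 2.0. Evaluating at the point from the question works the same way (a small follow-up, not part of the original answer):

print(grad_fct(5.0))   # 10.0, since d/dx (x**2 + 1) = 2x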



It can also compute gradients of more complex functions, e.g. multivariate functions.





