The following functions implement the trapezoidal rule for integrating a function given as a discrete set of points along its curve. We implement a vectorized and an unvectorized form and attempt to speed them up.
# Discretized sine function
x = linspace(0, pi, 100);
y = sin(x);
Here is a vectorized form, i.e., how you would code it in MATLAB or R (or in Python with NumPy).
function trapz1(x, y)
    if length(y) != length(x)
        error("Vectors must be of same length")
    end
    sum( (x[2:end] - x[1:end-1]) .* (y[2:end] + y[1:end-1]) ) / 2
end
println(trapz1(x, y)); gc()   # call once to compile; collect garbage before timing
@time [trapz1(x, y) for i in 1:1000];
And this is an unvectorized, explicit coding of trapezoidal integration.
function trapz2(x, y)
    local n = length(x)
    if length(y) != n
        error("Vectors 'x', 'y' must be of same length")
    end
    r = 0
    if n == 1; return r; end
    for i in 2:n
        r += (x[i] - x[i-1]) * (y[i] + y[i-1])
    end
    r / 2
end
println(trapz2(x, y)); gc()
@time [trapz2(x, y) for i in 1:1000];
Why is the following redefinition of trapz2 different (and faster)?
function trapz3(x, y)
    local n = length(x)
    if length(y) != n
        error("Vectors 'x', 'y' must be of same length")
    end
    r = 0.0
    if n == 1; return r; end
    for i in 2:n
        r += (x[i] - x[i-1]) * (y[i] + y[i-1])
    end
    r / 2
end
println(trapz3(x, y)); gc()
@time [trapz3(x, y) for i in 1:1000];
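The answer is type stability: in trapz2 the accumulator r starts as an Int and is rebound to a Float64 inside the loop, so the compiler cannot infer a single concrete type for it; in trapz3 the initialization r = 0.0 makes r a Float64 throughout. A quick way to check this is code_typed from Base (a sketch; the output format differs across Julia versions):
code_typed(trapz2, (Vector{Float64}, Vector{Float64}))   # 'r' inferred as Union(Float64,Int64)
code_typed(trapz3, (Vector{Float64}, Vector{Float64}))   # 'r' inferred as Float64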
Results and comparison with R and Python
         Timings [s, 1000 runs]   Result               µs/loop
trapz1   0.020384185              1.9998321638939924   20.4
trapz2   0.009617445              1.9998321638939929    9.6
trapz3   0.001451867              1.9998321638939929    1.5
trapz    0.000740450              1.9998321638939929    0.75
R:       unvectorized 300 µs,  vectorized 130 µs,  inline: --
Python:  unvectorized 119 µs,  vectorized  39 µs
+numba:  unvectorized < 1 µs,  vectorized  54 µs
A final version of trapz, with error handling, type stability, and omitted bounds checking, will be shown later.
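As a rough, hypothetical sketch (not the author's final version), these ingredients could be combined as follows:
# Hypothetical sketch: type-stable accumulator, input checks, and
# bounds checking disabled inside the loop via @inbounds.
function trapz{T<:Real}(x::Vector{T}, y::Vector{T})
    n = length(x)
    length(y) == n || error("Vectors 'x', 'y' must be of same length")
    r = zero(T)
    n == 1 && return r
    @inbounds for i in 2:n
        r += (x[i] - x[i-1]) * (y[i] + y[i-1])
    end
    r / 2
end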
For explicitly given functions (i.e., formulas), Julia provides analytical differentiation. Automatic differentiation (AD) is a technique to numerically evaluate the derivative of a function implemented as a program in a programming language, with the aim of computing derivatives to machine accuracy. This is important for solving (partial) differential equations, nonlinear optimization, sensitivity analysis, and design engineering.
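Dual numbers make this work: a dual number \(a + b\epsilon\), with \(\epsilon^2 = 0\), carries a derivative along with every value, since \(f(a + \epsilon) = f(a) + f'(a)\,\epsilon\). A minimal sketch of the mechanism (the example is ours; real and epsilon are the accessor functions of the DualNumbers package):
using DualNumbers
d  = Dual(2.0, 1.0)    # represents 2.0 + 1.0*epsilon
fd = d^2 + 3d          # evaluates f(x) = x^2 + 3x at the dual point
real(fd)               # value:      f(2)  = 10.0
epsilon(fd)            # derivative: f'(2) = 7.0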
The following example uses the lambertW function, the inverse of \(x \to x e^x\). This function is best calculated with the Newton-Raphson/Halley method, an iterative approach. We will apply automatic differentiation -- as provided by the DualNumbers package -- to this iterative procedure to determine the derivative of the Lambert W function exactly.
using DualNumbers
function lambertW(x::Number)
    if x < -1.0/exp(1.0); return NaN; end      # no real solution below the branch point
    if x == -1.0/exp(1.0); return -1.0; end    # W(-1/e) = -1
    w0 = 1.0
    # one step of Halley's iteration for w*exp(w) - x = 0
    w1 = w0 - (w0 * exp(w0) - x)/((w0 + 1.0) * exp(w0) -
         (w0 + 2.0) * (w0 * exp(w0) - x)/(2.0 * w0 + 2.0))
    while w0 != w1                             # iterate until convergence
        w0 = w1
        w1 = w0 - (w0 * exp(w0) - x)/((w0 + 1.0) * exp(w0) -
             (w0 + 2.0) * (w0 * exp(w0) - x)/(2.0 * w0 + 2.0))
    end
    return w1
end
The derivative can be calculated directly because it is the inverse of \(x \to x e^x\)
(remember your analysis course).
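Explicitly, by the inverse function rule with \(f(w) = w e^w\) and \(f'(w) = (1 + w) e^w\): \[ W'(x) = \frac{1}{f'(W(x))} = \frac{1}{(1 + W(x))\, e^{W(x)}}, \] which is exactly the formula coded below.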
function d_lambertW(x::Real)
    w1 = lambertW(x)
    return 1.0 / (1 + w1) / exp(w1)
end;
Now compute the exact derivative at 1.5 and compare it with automatic differentiation.
x0 = 1.5;
lambertW(1.5), d_lambertW(1.5)
lambertW(Dual(x0, 1.0))   # the epsilon(...) part is the derivative
Compare this with a numerical derivative calculated through the "central difference" formula:
# using Calculus
# Calculus.finite_difference(lambertW, 1.5)
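If the Calculus package is not at hand, the central difference \(f'(x) \approx (f(x+h) - f(x-h)) / (2h)\) is easily coded directly (a quick sketch; the step size h = 1e-7 is our choice, not from the original):
h = 1.0e-7                                      # assumed step size
(lambertW(x0 + h) - lambertW(x0 - h)) / (2h)    # approximates d_lambertW(x0)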
The following optimization packages are available for Julia:
Commercial solvers: CPLEX, Gurobi, Mosek [, SCIP]
Modeling: JuMP, MathProgBase, CVX
JuMP is an algebraic modeling language for Mathematical Programming in Julia. It supports linear (LP) and quadratic (QP) optimization problems with linear constraints (but not quadratic constraints) and mixed-integer linear problems (MILP).
It interfaces to a number of open-source and commercial solvers (Clp, Cbc, GLPK, Gurobi, MOSEK, and CPLEX) via a generic solver-independent interface.
JuMP is extremely fast in generating a model, even compared to commercial modeling languages (like AMPL or GAMS).
As an example, we will maximize the expression \(5x_1 + 3.5x_2 + 4.5x_3\) subject to the constraints \[3x_1 + 5x_2 + 4x_3 \le 540\] \[6x_1 + x_2 + 3x_3 \le 480\] with \(x_1, x_2, x_3 \ge 0\).
using JuMP, Cbc
m = Model( solver=CbcSolver() );
@defVar( m, x[1:3] >= 0)
@setObjective( m, Max, 5x[1] + 3.5x[2] + 4.5x[3] )
# @addConstraint( m, 3x[1] + 5x[2] + 4x[3] <= 540 )
# @addConstraint( m, 6x[1] + x[2] + 3x[3] <= 480 )
A = [3. 5. 4.; 6. 1. 3.];
b = [540., 480.];
for i = 1:2
    @addConstraint( m, sum{A[i, j]*x[j], j=1:3} <= b[i] )
end
m   # display the model
status = solve(m);
println("Objective value: ", getObjectiveValue(m))
println(getValue(x))
Knapsack problem:
As a second problem, we solve a knapsack problem: given the odd square numbers \(1^2, 3^2, \ldots, 43^2\), is there a subset that sums up to exactly 2014?
w = [1:2:43].^2;   # the odd squares 1, 9, 25, ..., 1849
n = length(w)      # n = 22
m = Model( solver=CbcSolver() );
@defVar( m, x[1:n], Bin );
@setObjective( m, Max, sum{w[i]*x[i], i=1:n});
@addConstraint(m, sum{w[i]*x[i], i=1:n} <= 2014);
status = solve(m);
println("Objective value: ", getObjectiveValue(m))
println(getValue(x))
Get and check the result:
idx = [iround(getValue(x[i])) for i in 1:n]
println(idx)
# sum(w[[1,4,5,12,13,14]]) == 2014 # = 1 + 7^2 + 9^2 + 23^2 + 25^2 + 27^2
s = w[find(idx)];
println(s)
println(sum(s))