while-loop - Iterative solution to reach convergence

I am trying to compute n in the formula f = n^2 iteratively, so that f approaches 3 from below to a precision of 3 decimal places. To illustrate, I wrote code here that successfully does what I want. One problem with this code is that in my real application I am launching an external script rather than evaluating f = n^2, and with this approach the whole iteration becomes so slow that it is unusable: it takes too many cycles to converge to the given precision. So I am looking for a faster way to reach convergence; by "faster" I mean converging in fewer cycles. Someone suggested a binary search algorithm to me; could you help create one using this small example, f = n^2?

Here is my code, which demonstrates what I want.

n = 1; % initial value
f=n^2; % function to calculate
tol = 3;   % tolerance
accuracy = 0.0000001; % accuracy

while f <= tol
    n = n+accuracy;
    f=n^2;
end
n = n-accuracy;
f=n^2;
disp(['n = ' num2str(n,10)])
disp(['f = ' num2str(f,10)])

Update: here is my attempt at setting up the function and its derivative for the Newton-Raphson algorithm. Unfortunately it is probably incorrect, so I would like to ask for your help.

n = 7.76; % initial guess
f_error = [7.73 9 12.404 7.659 8.66];
df_error = diff(f_error)
xNew = n - f_error./df_error

Answer 1

Newton-Raphson method

f_error(x) = x^(1/2) - N
f'_error(x) = (1/2)*x^(-1/2)
xNew = x - f_error(x)/f'_error(x)
repeat:
xNew = x - (x^(1/2) - N) / ((1/2)*x^(-1/2))

Methods that home in on the target value are much faster than trying to sweep through every floating-point-representable value in between:

n = 3; % initial guess
nNew = n;
val = 3; % value
f=val^2; % function to calculate

accuracy = 0.0001; % accuracy
error = 10000;
count = 0;

while abs(error) > accuracy
    nNew = n - (n^(1/2) - val)/((1/2)*n^(-1/2));
    error = nNew - n;
    n = nNew;
    count = count + 1;
end

disp(['c = ' num2str(count,10)])
disp(['n = ' num2str(n,10)])
disp(['f = ' num2str(f,10)])

Output:

c = 5 <--- only 5 iterations
n = 9
f = 9

Your method failed to finish within the time limit of the https://octave-online.net server:

n = 1; % initial value
f=n^2; % function to calculate
tol = 3;   % tolerance
accuracy = 0.0000001; % accuracy
count = 0;

while f <= tol
    n = n+accuracy;
    f=n^2;
    count = count+1;
end
n = n-accuracy;
f=n^2;
disp(['c = ' num2str(count,10)])
disp(['n = ' num2str(n,10)])
disp(['f = ' num2str(f,10)])

Output:

!!! OUT OF TIME !!!
c = 847309
n = 1.0847309
f = 1.176641126

It advanced only 0.1766 in 5 seconds, so it would need at least 1-2 minutes to finish, while the Newton-Raphson method needs only 5 iterations. The n = n + accuracy approach also fails for big n values because of rounding error: if the next binary-representable floating-point value above n is greater than n + accuracy, then n + accuracy rounds back to n and the loop runs forever.
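The question also asked about a binary search. For a monotonic f such as n^2 on a bracketing interval, bisection converges in about log2(range/accuracy) steps. A minimal Python sketch for illustration (the function, bracket, and names here are assumptions, not the asker's real external script):

```python
# Bisection: halve a bracketing interval [lo, hi] until it is narrower
# than the requested accuracy. Assumes f is monotonically increasing
# and the target lies between f(lo) and f(hi).
def bisect_invert(f, target, lo, hi, accuracy=1e-7):
    count = 0
    while hi - lo > accuracy:
        mid = (lo + hi) / 2
        if f(mid) < target:
            lo = mid   # root is in the upper half
        else:
            hi = mid   # root is in the lower half
        count += 1
    return lo, count

n, count = bisect_invert(lambda x: x**2, 3, 0.0, 3.0)
# about 25 halvings of [0, 3] instead of millions of fixed-size steps
```

Each iteration costs exactly one f evaluation, which matters when f is an external script.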

The Newton-Raphson method above uses a square root. Many x86 CPUs have a dedicated sqrt instruction, so it is most likely hardware-accelerated, but you can also compute the square root with another Newton-Raphson iteration:

f_error(x) = x^2 - N
f'_error(x) = 2*x
xNew = x - f_error(x)/f'_error(x)
xNew = (1/2)*x + (1/2)*(N/x)
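Worked out, this is Heron's method: xNew = (x + N/x)/2. A short Python sketch of the iteration (function name and stopping rule are illustrative choices):

```python
# Newton-Raphson square root (Heron's method): xNew = (x + N/x) / 2.
# Convergence is quadratic: the number of correct digits roughly
# doubles with each iteration.
def nr_sqrt(N, x=1.0, accuracy=1e-12):
    while True:
        x_new = 0.5 * (x + N / x)
        if abs(x_new - x) <= accuracy:
            return x_new
        x = x_new
```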

But that still contains the division N/x. If the CPU has no divider, you can use yet another Newton-Raphson iteration to compute the reciprocal:

f_error(x) = 1/x - N
f'_error(x) = -1/(x*x)
xNew = x - f_error(x)/f'_error(x)
xNew = x*(2-N*x)
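A small Python sketch of this division-free reciprocal iteration (the starting guess here is an illustrative assumption; the iteration only converges when the initial guess satisfies |1 - N*x| < 1):

```python
# Division-free reciprocal: x_{k+1} = x * (2 - N*x) converges to 1/N
# quadratically, provided the initial guess satisfies |1 - N*x0| < 1.
def nr_reciprocal(N, x, iterations=30):
    for _ in range(iterations):
        x = x * (2 - N * x)   # only multiplications and a subtraction
    return x

inv3 = nr_reciprocal(3.0, 0.3)   # 0.3 is close enough to 1/3 to converge
```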

Applying this to the other two methods would give a purely multiplication-based version, but it would probably be slower, because the nested iteration counts multiply (roughly cubing the number of iterations), so it could take hundreds of iterations in total.

Edit: for an unknown f function running on a remote server, you can use a simple two-point derivative. This is the simplest discrete derivative, and it has a larger error than the 3-point, 5-point, etc. variants:

% remote server function that has unknown algorithm
% so only numerical derivative will be used
function result = blackBoxFunction(x)
    result = x^(1/2); 
end

% numerical (discrete two-point, central-difference) derivative
% x and y are the function values at n-h and n+h
% beware: the error is O(h^2), so keep h small
% add more points (with coefficients) for O(h^4) or better error
function result = twoPointDerivative(x,y,h)
    result = (y-x)/(2*h); 
end

n = 3; % initial guess
nNew = n;
val = 3; % value
f=val^2; % function to calculate

accuracy = 0.0001; % accuracy
error = 10000;
count = 0;

% Newton-Raphson with a blackbox function
% f_error(n) = blackBoxFunction(n) - val
% f'_error(n) = derivative_of_blackbox(n)
% nNew = n - f_error(n)/f'_error(n)
% repeat:
% nNew = n - (blackBoxFunction(n)-val)/derivative_of_blackbox(n) 

while abs(error) > accuracy
    point1 = blackBoxFunction(n - accuracy);
    point2 = blackBoxFunction(n + accuracy);
    nNew = n - (blackBoxFunction(n) - val)/twoPointDerivative(point1, point2, accuracy);
    error = nNew - n;
    n = nNew;
    count = count + 1;
end

disp(['c = ' num2str(count,10)])
disp(['n = ' num2str(n,10)])
disp(['f = ' num2str(f,10)])

Output:

c = 5
n = 9
f = 9

If you need even fewer than 5 iterations, you should use a better initial guess, perhaps a second-order polynomial approximation of x^2. (For the x-squared problem here it is obviously c1 + c2*x + c3*x*x --> c1=0, c2=0, c3=1, but you can always get help from series expansions, genetic algorithms, and other heuristics to find a good set of coefficients that minimizes the Newton-Raphson iterations over the whole input range, not just for 3.)

For example, suppose you have cos(x) as the black-box function and you want acos(x); then a polynomial initial guess works well:

% remote server function that has unknown algorithm
% so only numerical derivative will be used
function result = blackBoxFunction(x)
    result = cos(x); 
end

% numerical (discrete two-point, central-difference) derivative
% x and y are the function values at n-h and n+h
% beware: the error is O(h^2), so keep h small
% add more points (with coefficients) for O(h^4) or better error
function result = twoPointDerivative(x,y,h)
    result = (y-x)/(2*h); 
end

val = 0.04; % value that is the input of this problem: what is acos(0.04) ??
n = 1 - (val^2)/2 + (val^4)/24; % initial guess by polynomial
nNew = n;
f=acos(val); % function to calculate

accuracy = 0.0001; % accuracy
error = 10000;
count = 0;

% Newton-Raphson with a blackbox function
% f_error(n) = blackBoxFunction(n) - val
% f'_error(n) = derivative_of_blackbox(n)
% nNew = n - f_error(n)/f'_error(n)
% repeat:
% nNew = n - (blackBoxFunction(n)-val)/derivative_of_blackbox(n) 

while abs(error) > accuracy
    point1 = blackBoxFunction(n - accuracy);
    point2 = blackBoxFunction(n + accuracy);
    nNew = n - (blackBoxFunction(n) - val)/twoPointDerivative(point1, point2, accuracy);
    error = nNew - n;
    n = nNew;
    count = count + 1;
end

disp(['c = ' num2str(count,10)])
disp(['n = ' num2str(n,10)])
disp(['f = ' num2str(f,10)])

Output:

c = 3 <-- 2 less iterations !!
n = 1.530785652
f = 1.530785652

With a five-point derivative:

% remote server function that has unknown algorithm
% so only numerical derivative will be used
function result = blackBoxFunction(x)
    result = cos(x); 
end

% numerical (discrete five-point, central-difference) derivative
% beware: the error is O(h^4), so keep h small
% add more points (with coefficients) for O(h^6) or better error
function result = fivePointDerivative(x,h,f)
    result = (f(x-2*h)-8*f(x-h)+8*f(x+h)-f(x+2*h))/(12*h); 
end

val = -1; % value that is the input of this problem: what is acos(-1) ?? (it is PI)
n = 1 - (val^2)/2 + (val^4)/24; % initial guess by polynomial
nNew = n;
f=acos(val); % function to calculate

accuracy = 0.00000000001; % accuracy
error = 1;
count = 0;

% Newton-Raphson with a blackbox function
% f_error(n) = blackBoxFunction(n) - val
% f'_error(n) = derivative_of_blackbox(n)
% nNew = n - f_error(n)/f'_error(n)
% repeat:
% nNew = n - (blackBoxFunction(n)-val)/derivative_of_blackbox(n) 
h_diff = 0.0000001;
while abs(error) > accuracy
    nNew = n - (blackBoxFunction(n) - val)/fivePointDerivative(n, h_diff, @blackBoxFunction);
    error = nNew - n;
    n = nNew;
    count = count + 1;
end

disp(['c = ' num2str(count,15)])
disp(['n = ' num2str(n,15)])
disp(['f = ' num2str(f,15)])

Output:

c = 29
n = 3.14159265683019
f = 3.14159265358979
              ^ 3e-9 error

Also, if the black-box function is very expensive, this simple derivative will be slow, but you can look into non-uniform discrete-derivative methods that reuse black-box outputs from earlier iterations to minimize the runtime.
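One such approach is the secant method: it replaces the finite-difference derivative with the slope through the last two iterates, so each step needs only one new black-box call instead of three to five. A Python sketch (the starting points 1.0 and 1.6 are illustrative assumptions for the acos(0.04) example):

```python
import math

# Secant method: reuse the previous black-box evaluation, so only one
# expensive f(x) call is made per iteration.
def secant_invert(f, target, x0, x1, accuracy=1e-10, max_iter=100):
    f0, f1 = f(x0) - target, f(x1) - target
    for count in range(1, max_iter + 1):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)  # slope from last two points
        if abs(x2 - x1) <= accuracy:
            return x2, count
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2) - target           # the only new f call
    return x1, max_iter

n, count = secant_invert(math.cos, 0.04, 1.0, 1.6)  # n approximates acos(0.04)
```

Convergence is slightly slower than Newton-Raphson (order about 1.6 instead of 2), but each iteration is several times cheaper when f dominates the cost.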
