HiveBrain v1.2.0

Combining two infinite algorithms to create a single infinite algorithm

Submitted by: @import:stackexchange-cs

Problem

Suppose I have an algorithm $F$ that, given a numeric input $x$, generates an infinite sequence of numbers that converges to $f(x)$, where $f$ is some continuous real-valued function.

Similarly, I have an algorithm $G$ that, given a numeric input $y$, generates an infinite sequence of numbers that converges to $g(y)$, where $g$ is another continuous real-valued function.

I would like to have an algorithm $H$ that, given $x$, generates an infinite sequence of numbers that converges to $g(f(x))$. How can I do this?

My current idea is as follows. For every algorithm $A$ and integer $i$, let $A_i(x)$ be the $i$-th value in the sequence returned by algorithm $A$ when run on input $x$.

Then, the algorithm $H$ should return the sequence:

$$G_i(F_i(x))$$

Initially, $H$ runs a single step of $F$ on $x$ and runs a single step of $G$ on the result. Then, $H$ runs two steps of $F$ on $x$ and runs two steps of $G$ on the result. This goes on infinitely.
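
For concreteness, here is a minimal Python sketch of this scheme, assuming (hypothetically) that $F$ and $G$ are exposed as generator functions whose $i$-th yielded value is $F_i(x)$ and $G_i(y)$ respectively; the Newton-iteration and Taylor-sum instances below are illustrative stand-ins, not part of the original problem:

```python
from itertools import islice

def H(F, G, x):
    """Diagonal combination: the i-th output is G_i(F_i(x)).

    Assumes F and G are generator functions: F(x) yields
    F_1(x), F_2(x), ... and G(y) yields G_1(y), G_2(y), ...
    """
    i = 1
    while True:
        # Run i steps of F on x and keep the i-th value.
        f_i = next(islice(F(x), i - 1, i))
        # Run i steps of G on F_i(x) and keep the i-th value.
        g_i = next(islice(G(f_i), i - 1, i))
        yield g_i
        i += 1

# Illustrative stand-ins: F converges to sqrt(x) (Newton's method),
# G converges to exp(y) (Taylor partial sums), so H should
# converge to exp(sqrt(x)).
def F(x):
    y = max(x, 1.0)
    while True:
        y = 0.5 * (y + x / y)
        yield y

def G(y):
    s, term, n = 1.0, 1.0, 1
    while True:
        term *= y / n
        s += term
        n += 1
        yield s

for value in islice(H(F, G, 2.0), 8):
    print(value)  # approaches exp(sqrt(2)) ≈ 4.1133
```

Note that each output restarts both $F$ and $G$ from scratch, which is the $O(i)$ cost discussed in the solution below.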

Is this idea correct? Is there a better idea?

Solution

Is this idea correct?

(I apologize for my initial, poorly thought-out response to this question, and thank the commenters for pointing out the flaws.)

Your solution will work provided $g(\cdot)$ is continuous at $f(x)$ (which the problem assumes) and the convergence of $G_i(y)$ to $g(y)$ is uniform in $y$ near $f(x)$.

Since $G_i(y)$ converges to $g(y)$, for any constant $\epsilon_G>0$ there exists some $N_G>0$ such that for any $y$ and all $i>N_G$,
$$-\epsilon_G < G_i(y) - g(y) < \epsilon_G.$$
(Taking a single $N_G$ that works for every $y$ is exactly the uniform-convergence assumption above.)

Since $g$ is continuous at $f(x)$, for any $\epsilon_g>0$ there exists some $\epsilon_f>0$ such that $-\epsilon_g < g(y) - g(f(x)) < \epsilon_g$ for all $y \in \left(f(x)-\epsilon_f,\, f(x)+\epsilon_f\right)$. Because $F_i(x)$ converges to $f(x)$, there exists some $N_F>0$ such that for all $i>N_F$,
$$-\epsilon_f < F_i(x) - f(x) < \epsilon_f.$$

Hence for all $i>N_F$, $F_i(x)$ lies within $\epsilon_f$ of $f(x)$, so $g(F_i(x))$ lies within $\epsilon_g$ of $g(f(x))$; i.e., $g(F_i(x))$ converges to $g(f(x))$. That is, for any $\epsilon_g>0$, there exists some $N_g>N_F$ such that for all $i>N_g$,
$$-\epsilon_g < g(F_i(x)) - g(f(x)) < \epsilon_g.$$

Now, given any $\epsilon>0$, set $\epsilon_G = \epsilon_g = \frac{\epsilon}{2}$. Then for all $i>N$ (where $N=\max(N_G, N_F, N_g)$),
$$-\epsilon = -(\epsilon_G+\epsilon_g) < G_i(F_i(x))-g(F_i(x)) + g(F_i(x)) - g(f(x)) < (\epsilon_G+\epsilon_g) = \epsilon,$$
or,
$$-\epsilon < G_i(F_i(x)) - g(f(x)) < \epsilon.$$
That is, $G_i(F_i(x))$ converges to $g(f(x))$.

Is there a better idea?

Probably. Your algorithm generates $O(i)$ numbers to create the $i$th output number.

Consider what would happen if the problem were finite (i.e., you stopped at the $n$th output). Then you would run $F$ until you obtained $F_n(x)$, and then run $G$ using $F_n(x)$ as its input. This would require generating only $O(n)$ numbers. Your problem is not finite, though.

However, you can choose some constant $c$ and restart $G$ only after every $c$ steps. That is, the $i$th output will be $G_i(F_{c\lfloor i/c\rfloor}(x))$. This makes the convergence less smooth, but it improves the running time: generating the $i$th output number now costs only $O(i/c)$ (amortized) generated numbers.
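
Here is a minimal Python sketch of this amortized variant, under the same hypothetical generator convention as the sketch in the problem above; the anchor index $c\lfloor i/c\rfloor$ is clamped to $1$ for $i<c$, since the formula leaves $F_0$ undefined:

```python
def H_amortized(F, G, x, c=10):
    """i-th output is G_i(F_a(x)), where a = max(c * (i // c), 1),
    so G is restarted only once every c steps and F is never restarted.

    F and G are assumed (hypothetically) to be generator functions,
    as in the earlier sketch.
    """
    f_it = F(x)
    f_count, f_last = 0, None
    g_it, anchor = None, None
    i = 1
    while True:
        a = max(c * (i // c), 1)    # F index that G is currently applied to
        while f_count < a:          # advance the single F run as needed
            f_last = next(f_it)
            f_count += 1
        if a != anchor:             # anchor moved: restart G on the new F value
            anchor = a
            g_it = G(f_last)
            for _ in range(i - 1):  # fast-forward the fresh run to position i-1
                next(g_it)
        yield next(g_it)            # this is G_i(F_a(x))
        i += 1
```

With $c=1$ this degenerates to restarting $G$ on $F_i(x)$ at every step, i.e., the original scheme; larger $c$ trades smoothness of convergence for less recomputation.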

Context

StackExchange Computer Science Q#44650, answer score: 2
