{\bf Addendum -- Borg Processes} 7/8/93
The original trisection algorithm is not quite the same as the
algorithm for $k$-secting an angle. The sequence generated by the first
trisection algorithm is:
$${3\over 8},\quad {11\over 32},\quad {43\over 128},\quad {171\over 512},$$
but the sequence generated by the general $k$-section algorithm is
$${1\over 4},\quad {5\over 16},\quad {21\over 64},\quad {85\over 256},$$
and so on. Yet both converge to one-third; what is going on?
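A quick way to see what is happening is to run the iteration $x \mapsto (x+1)/4$ from both starting points; here is a small Python sketch using exact rational arithmetic:

```python
from fractions import Fraction

def trisect_step(x):
    # One step of the trisection iteration: x -> (x + 1)/4.
    return (x + 1) / Fraction(4)

a = Fraction(3, 8)   # start of the original trisection algorithm
b = Fraction(0)      # start of the general k-section algorithm
for _ in range(4):
    a, b = trisect_step(a), trisect_step(b)
    print(a, b)      # 11/32 1/4, then 43/128 5/16, ...
```

Both columns march toward $1/3$, which is the fixed point of $x = (x+1)/4$.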
\vskip 1cm
{\bf What Is Going On}
Let's say we have our angle $\alpha$, and we want to $k$-sect it
using the $k$-section algorithm,
but before we begin the process $\alpha$ gets perturbed by some factor
$\xi$ (if you are of a more mathematical bent you can let $\xi = 1+\epsilon$).
So we know $\alpha$, but instead of starting
the iteration with $\alpha$ we start it with $\xi\alpha$. The first term
in the iteration sequence would then be
$$\xi\over q$$
(where $q = k+1$).
Now we just merrily chug along, using the correct $\alpha$, and generate
the following sequence:
$${\xi + q\over q^2},\quad {\xi + q + q^2\over q^3},\quad {\xi + q + q^2 + q^3\over q^4},$$
and so on. So, this is similar to the regular $k$-section
algorithm, whose $N$th term is
$${1\over q^N}\sum_{i=0}^{N-1} q^i$$
but in this case it is
$${1\over q^N}\Bigl(\xi + \sum_{i=1}^{N-1} q^i\Bigr)$$
or
$${1\over q^N}\Bigl(\xi - 1 + \sum_{i=0}^{N-1} q^i\Bigr).$$
If we simplify by using the property
$$(q-1)\sum_{i=0}^{N-1} q^i = q^N - 1$$
we then get
$$\eqalign{{1\over q^N}\Bigl(\xi - 1 + \sum_{i=0}^{N-1} q^i\Bigr) &=
{1\over q^N}\Bigl(\xi - 1 + {q^N - 1\over q-1}\Bigr)\cr
&= {1\over q^N}(\xi - 1) + {1\over q-1}\Bigl(1 - {1\over q^N}\Bigr)\cr
&= {1\over k} + \Bigl(\xi - {k+1\over k}\Bigr)
\Bigl({1\over k+1}\Bigr)^N\cr}$$
So, the sequence still converges to
$$1\over k$$
with a slightly different error term.
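As a sanity check on the algebra, here is a short Python sketch (the values of $k$ and $\xi$ are arbitrary test choices) comparing the perturbed sequence term by term against this closed form:

```python
# Compare the perturbed sequence xi/q, (xi + q)/q^2, ... term by term
# against the closed form 1/k + (xi - (k+1)/k) * (1/(k+1))**N.
k, xi = 3, 1.05          # arbitrary test values
q = k + 1
x = xi / q               # first term of the perturbed sequence
for N in range(1, 12):
    closed = 1 / k + (xi - (k + 1) / k) * (1 / q) ** N
    assert abs(x - closed) < 1e-12
    x = (x + 1) / q      # later steps use the correct angle
print("closed form agrees")
```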
\vskip 1cm
{\bf What Is Really Going On}
There is a more subtle aspect to the process. To maintain the
purity of the iteration, defined as
$$x_i = {1\over q}(x_{i-1} + 1)$$
let us simply start iterating from an arbitrary point $x_0$. The first
iteration would then produce
$$x_1 = {1\over q}(x_0 + 1).$$
The next two iterations would produce
$$\eqalign{x_2 &= {1\over q^2}(x_0 + 1 + q)\cr
x_3 &= {1\over q^3}(x_0 + 1 + q + q^2)\cr}$$
and in general
$$x_N = {1\over q^N}\Bigl(x_0 + \sum_{i=0}^{N-1} q^i\Bigr)$$
or
$$x_N = {1\over k} + \Bigl(x_0 - {1\over k}\Bigr)\Bigl({1\over k+1}\Bigr)^N.$$
An obvious question, then, is how to pick $x_0$. It should be clear
that you can start the process at any point $x_0$ and it will still converge
to the desired result, and this is why errors during the process do no
harm: an error is just like restarting from a new $x_0$.
The task is to choose an optimal $x_0$ to minimize
the error.
The way the $k$-section algorithm works is that it uses the current
value to find a ``better'' $x_0$, and then runs itself again, starting from
this ``better'' $x_0$. It is this property that allows the algorithm to absorb
errors at any point in the calculation; all an error does is force the
iterations to begin at a new starting point. The iteration simply keeps
moving towards the value $1/k$, with each iteration starting from a ``better''
starting point.
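This error-absorbing behavior is easy to see numerically; here is a Python sketch (the size and timing of the injected error are arbitrary):

```python
# Inject a large "construction error" partway through the 3-section
# iteration and watch it get damped away by a factor of 1/4 per step.
k = 3
q = k + 1
x = 0.0
for i in range(60):
    if i == 10:
        x += 5.0              # the error: like restarting from a new x_0
    x = (x + 1) / q
print(abs(x - 1 / k) < 1e-12)  # True
```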
The best choice for $x_0$ is one that is close to $1/k$ -- the closer
the better. What the $k$-section algorithm does is to choose $x_0 = 0$,
which will always be pretty close to $1/k$, especially as $k$ gets large.
Alternatively, if you would like to look at it another way, choosing
$x_0 = 0$ forces the first iteration to have the value $1/(k+1)$. This
will always be very close to $1/k$ -- in fact, $1/(k+1)$ is always closer
to $1/k$ than $1/(k-1)$ is, since
$${1\over k} - {1\over k+1} = {1\over k(k+1)} < {1\over k(k-1)} = {1\over k-1} - {1\over k}.$$
So, in terms of choosing
a starting value for an arbitrary $k$, this works out nicely, perhaps even
optimally, from a computational point of view.
If we consider the simple case, $k=3$, we can calculate fractions
that are close to $1/3$ which would provide good values for $x_0$. Since
we are using a compass and only know how to bisect, the denominator of
the fraction must be a power of two. The most obvious starting place of
this nature would be $1/4$; however, this is what the algorithm will choose
as the first iteration. Next would be $3/8$.
This is the number at which I started
the original trisection algorithm. The next best place to start would
be $5/16$. But this is what the general algorithm would
generate as the second iterate! All the algorithm is doing is approximating
$1/k$ by fractions whose denominators are powers of two. So, by starting
at zero, the algorithm automagically computes the optimal next starting
point, in this case a fraction with a power-of-two denominator, given
the limitations of the surroundings (i.e. compass and straightedge).
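To check this, here is a Python sketch comparing each iterate against the closest fraction with that power-of-two denominator (the range of denominators tested is arbitrary):

```python
from fractions import Fraction

# For each denominator 2^j, the closest fraction m/2^j to 1/3 has
# m = round(2^j / 3).  Compare these against the 3-sect iterates.
closest = {2**j: Fraction(round(2**j / 3), 2**j) for j in range(2, 10)}

x = Fraction(0)
for _ in range(4):
    x = (x + 1) / 4
    print(x, x == closest[x.denominator])   # each iterate is the closest
```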
This, however, is from a purely computational point of view. From
a practical point of view, you are sitting there with an angle and a method
which starts from an arbitrary point. Therefore, the best way to go would
probably be to just guess where the $k$-section is, and iterate from there.
In the above example, your guess would undoubtedly be closer to $1/3$ than
the algorithm's first guess, $1/4$.
As a philosophical note, this is ideal. The purpose of this was to
have a geometrical method for $k$-secting an angle, without the need for any
special calculations (although you do need to be able to prime-factor $k$).
Not only does the algorithm do so, but it does so in a way that optimizes the
number of constructions that need to be performed. The method is simple,
neat, and general.
\vskip 1cm
{\bf Fun Stuff}
Below are some related issues I have been playing with.
\vskip 1cm
{\bf Borg Processes}
Let us say that we have an arbitrary process which we start at a
point $x_0$, and it progresses through a series of steps and eventually
arrives at a point $x$. Let us then say that this process simply
assimilates any errors that might be introduced, and keeps going towards
the end point $x$. Let us call such a process a {\it Borg Process}.
So, for instance, we could start a given Borg Process at
{\it any} point $x_0$ and it will {\it always} end at the point $x$.
Intermediate
steps can have errors, the process itself can have errors, but as long as
the number of errors is finite the Borg will eventually get to $x$.
I am wondering whether Borg Processes may be intentionally constructed
to perform certain tasks.
The most interesting property of the Borgs is that errors
get crushed, due to the fact that the iteration constantly multiplies by $1/q$.
In almost every process I can think of, introducing a little
error at the beginning can cause wild divergence at the end -- a ``butterfly
effect'' of sorts. In this case, errors just get damped out -- a
``cocoon effect''. I would think that this would be a very useful property
in many situations.
In the case of the $k$-section Borg, the process is a recursive
one, which generates a sequence defined by
$$x_{i+1} = {1\over k+1}(x_i + 1).$$
Are all recursive sequences Borg Processes? Certainly not! Consider the
Fibonacci sequence:
$$x_{i+1} = x_i + x_{i-1}.$$
The $k$-section Borg can be started at any point, so that $x_0$ is arbitrary
for the process.
If we start the Fibonacci process at points other than one and
two, we get a sequence of numbers which is very different from the Fibonacci
series, and gets farther and farther away from the original Fibonacci
series at each successive step.
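A quick Python illustration (the seed perturbation of one is an arbitrary choice):

```python
# Perturb one Fibonacci seed and track how far the sequence drifts:
# the error itself grows like a Fibonacci sequence.
def fib_seq(a, b, n):
    seq = [a, b]
    for _ in range(n - 2):
        seq.append(seq[-1] + seq[-2])
    return seq

plain = fib_seq(1, 2, 12)
bumped = fib_seq(1, 3, 12)   # error of 1 in the second seed
print([y - x for x, y in zip(plain, bumped)])
# -> [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```

Since the recurrence is linear, the error obeys the same recurrence as the sequence itself, so it grows instead of damping out.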
How about a Taylor series? Consider $\sin(x)$. The series expansion
for it may be written as
$$\sin(x) \sim S_i$$
where
$$S_i = S_{i-1} + {(-1)^i\over (2i+1)!}\,x^{2i+1}$$
or
$$S_i = {1\over (2i+1)!}\Bigl((2i+1)!\,S_{i-1} + (-1)^i x^{2i+1}\Bigr)$$
with
$$S_0 = x.$$
If we let $S_0$ equal, say, $2x$, the partial sums will converge to
$\sin(x) + x$ rather than $\sin(x)$.
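To see this in Python (a sketch; the cutoff of 20 terms is arbitrary):

```python
import math

# Partial sums of the sine series, with a free choice of S_0.
def sine_partial(x, s0, terms=20):
    s = s0
    for i in range(1, terms):
        s += (-1) ** i * x ** (2 * i + 1) / math.factorial(2 * i + 1)
    return s

x = 1.0
print(round(sine_partial(x, x), 6))       # sin(1) = 0.841471
print(round(sine_partial(x, 2 * x), 6))   # sin(1) + 1 = 1.841471
```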
One process that is Borgish is a Newton Iteration for finding roots
of an equation. You can start it at any point, and it will go to work.
Unfortunately, it is a timid Borg process, because there are lots of curves
for which it can get stuck if started at certain points. It might also
converge to the ``wrong'' root if started at certain points.
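For instance, here is a Python sketch with the arbitrary test function $f(x) = x^3 - x$, which has roots at $-1$, $0$, and $1$:

```python
# Newton iteration for f(x) = x^3 - x: which root you reach depends on
# the starting point, and starts near f'(x) = 0 behave badly.
def newton(x, steps=50):
    for _ in range(steps):
        x = x - (x**3 - x) / (3 * x**2 - 1)
    return x

print(round(newton(2.0), 6))    # -> 1.0
print(round(newton(-2.0), 6))   # -> -1.0
print(round(newton(0.4), 6))    # lands on the middle root, 0
```

Starting at $0.4$, which is closest to the root at $1$, the iteration overshoots and ends up at $0$ instead -- the "wrong" root.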
\vskip 1cm
{\bf Post Addendum Addendum}
As weird as the above may be, the method is simply an iterative
process with an infinite basin of attraction, as was pointed out to me by
Hermann Riecke.
\vskip 1cm
{\bf Getting off the track}
One of the tricks to the Borg Process above is that previous
terms are multiplied by $1/(k+1)$, which assimilates the error into the
sequence. What happens if we try to make a Borg Fibonacci, and define the
sequence as
$$x_{i+1} = {1\over k+1}(x_i + x_{i-1})$$
with $x_0 = x_1 = 1/(k+1)$? In this case it will simply go to
zero, except for $k=1$, where it converges to $0.5$.
A perhaps more interesting Borg Fibonacci is
$$x_{i+1} = {1\over k+1}(x_i + x_{i-1}) + 1$$
which will converge to
$${k+1\over k-1}.$$
Ah ha, you are wondering about $k=1$. Well, strange and funky things happen
when $k=1$.
For values of $k$ greater than $1$ the difference between successive terms
goes to zero.
When $k=1$, it looks like the difference between successive Borg Fibonacci
terms goes to $2/3$.
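These limits are easy to check numerically; a Python sketch (200 steps is an arbitrary cutoff):

```python
# The damped-plus-one Fibonacci: x_{i+1} = (x_i + x_{i-1})/(k + 1) + 1,
# seeded with x_0 = x_1 = 1/(k + 1).
def borg_fib(k, n):
    a = b = 1 / (k + 1)
    for _ in range(n):
        a, b = b, (a + b) / (k + 1) + 1
    return a, b

print(round(borg_fib(3, 200)[1], 6))   # (3+1)/(3-1) = 2.0
a, b = borg_fib(1, 200)
print(round(b - a, 6))                 # successive difference -> 0.666667
```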
\vskip 1cm
{\bf Stuff and some junk}
On an entirely unrelated note, there are some interesting things in
the sequence generated by the $N$-section algorithm. For instance, in the
sequence for a 3-sect, at each point in the iteration, the difference between
the numerator and the denominator seems to be prime. For a 4-sect, every
other number looks rather prime-ish.
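Here is a Python sketch to check the 3-sect observation (trial division is plenty at these sizes):

```python
from fractions import Fraction

# Differences between denominator and numerator of the 3-sect iterates
# 1/4, 5/16, 21/64, ...: that is, 3, 11, 43, 171, 683, ...
def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

x = Fraction(0)
for _ in range(6):
    x = (x + 1) / 4
    diff = x.denominator - x.numerator
    print(diff, is_prime(diff))
# 171 = 9 * 19 is the first composite in the list.
```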
\bye