
path distance question

Posted by Dylan on Aug 01, 2007

I noticed that for a bezier path p, bezier.length(p) and p.length yield slightly different results. Is one more accurate than the other?

Incidentally, I ask this because I am writing some code to reparameterize curves by distance traversed instead of t. (The result will be a dictionary of corresponding d to t values. That way if you need to do many similar calculations on a given path, you need only do a quick dictionary lookup rather than perform the same calculations multiple times.)


 
Posted by Tom De Smedt on Aug 03, 2007

Hi Dylan,

If you are still using the external Bezier library, try avoiding it. The math bundled in NodeBox 1.8.5 is much faster and has several subtle bug fixes.

The difference between bezier.length(path) and path.length is precision: bezier.length(path) integrates in 20 steps by default, path.length in 10. So the bigger your path gets, the bigger the difference (but it will probably never be larger than 1%, which is negligible in a graphics application).

The reason behind this is speed optimisation. If you need higher precision, call bezier.length(path, n=50), for example.

It might be useful for you to know that a path already keeps a cache of its segment lengths, so that when a user repeatedly queries points between 1 and 100, it doesn't have to recalculate.
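
For instance, here's a rough comparison sketch (assuming a path built with beginpath()/curveto()/endpath(draw=False), and that bezier refers to the bundled module mentioned above):

beginpath(100, 100)
curveto(200, 100, 250, 200, 150, 300)
curveto(50, 200, 100, 150, 100, 100)
p = endpath(draw=False)

print p.length                 # integrates in 10 steps
print bezier.length(p)         # integrates in 20 steps
print bezier.length(p, n=50)   # slower, but more precise

# repeated point() queries on the same path reuse the cached segment lengths
for i in range(100):
    pt = p.point(i / 100.0)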

Perhaps when your code is finished you should post some of it, I'm interested.

Hope that helps!



Posted by Dylan Saloner on Aug 03, 2007

Here's some code to find the t parameter of a bezier curve, given the distance traversed along it. (To find the distance, given t, just don't call swap_dictionary.) The idea is to return this value in log(n) time, where n is the number of samples used in the path length integrator.

Here's how it's used:

p = #some path
dic  = make_d_to_t_dict(p)
sorted_keys = dic.keys()
sorted_keys.sort()
d = #a distance d along p (< path's length)
t = d_to_t(d, dic, sorted_keys)
The code won't fit in this post, but hopefully it'll fit in the next.



Posted by Dylan on Aug 03, 2007

I'll have to spread the code out over two posts.

import math

# called during initialization process (need not be superfast)
def make_d_to_t_dict(path, error = .1):
    """
    Given a path, return a dictionary d_to_t_dict so that if dist is the distance traversed along the curve,
    the corresponding curve parameter is t = d_to_t_dict[dist].  Because this is a numerical procedure and hence
    imprecise, it is only guaranteed that the total length of the curve is accurate to within error
    (in the same units as the path's coordinates),
    e.g.: 0 = d_to_t_dict[0], 1 = d_to_t_dict[curve length]
    """
    return swap_dictionary(make_t_to_d_dict(path, error))
    
 
# called during initialization process (need not be superfast)
def swap_dictionary(original_dict):
    """
    Swap keys and values of a dictionary (values should be distinct).
    """
    return dict([(v, k) for (k, v) in original_dict.iteritems()])
        
# called during initialization process (need not be superfast)            
def make_t_to_d_dict(path, error):
    """
    see make_d_to_t_dict(path)
    Given a parametric function and an acceptable error for the total arc
    length of the curve, return a mapping of points in the parameter space
    to the corresponding distance traversed along the curve, represented
    as a dictionary.
    """
    n = 2
    t = t_refine(n)
    ttpd = make_t_to_point_dict(path,t)
    ttdd = make_t_to_d_dict_from_ttpd(ttpd)
    old_arc_length = ttdd[1.0]
    er = old_arc_length   # seed the error with the full length so the loop refines at least once
    while (er > error):
        n = 2*n
        t = t_refine(n)
        ttpd = make_t_to_point_dict(path,t)
        ttdd = make_t_to_d_dict_from_ttpd(ttpd)
        er = ttdd[1.0] - old_arc_length
        old_arc_length = ttdd[1.0]
    return ttdd
 
 
# called during initialization process (need not be superfast)
def make_t_to_point_dict(path, t):
    """
    Given a parametric path mapping [0,1] -> R_n and a set of parameter points, 
    return the correspondings pts in R_n as a dictionary of t and R_n
    """
    t_to_point_dict = {}
    for s in t:
        t_to_point_dict[s] = path.point(s).x, path.point(s).y
    return(t_to_point_dict)
 
# called during initialization process (need not be superfast)
def make_t_to_d_dict_from_ttpd(ttpd):
    """
    Given a t to point dictionary, ttpd, use a simple linear approximation
    to numerically integrate the traversed arc length given t.
    Return a dictionary with keys ttpd.keys() and values the corresponding
    traversed distance.
    """
    t = ttpd.keys()
    t.sort()
    t_to_d_dict = {0:0}
    for i in range(1,len(t)):
        added_dist = math.sqrt((ttpd[t[i]][0] - ttpd[t[i-1]][0])**2 + (ttpd[t[i]][1] - ttpd[t[i-1]][1])**2)
        t_to_d_dict[t[i]] = t_to_d_dict[t[i-1]] + added_dist
    return t_to_d_dict



Posted by Dylan on Aug 03, 2007

The rest of it:

# called during initialization process (need not be superfast)
def t_refine(n):
    """
    Generate a list of n equally spaced reals with end points 0 and 1.
    """
    return [i / float(n - 1) for i in range(n)]
        
 
# called during the simulation (need for speed)
def get_index(x, mon_list):
    """
    Given a list of monotonically increasing comparables, and x, return n
    such that mon_list[n] <= x <= mon_list[n+1].  For a list of length m,
    this should be done in O(log(m)) time.  Raise an exception if x
    exceeds the range of mon_list.
    This function is intended to be used to look up values in d_to_t_dict, which
    has a monotonic index.
    """
    if x < mon_list[0] or x > mon_list[-1]:
        raise ValueError('x out of range')
    else:
        return get_index_help(x, mon_list, 0, len(mon_list) - 1)
    
 
# called during the simulation (need for speed)
def get_index_help(x, mon_list, left_range, right_range):
    """
    Recursive helper for get_index.  Does the real work.  Homes in on the
    index using binary search.
    """
    if left_range == right_range - 1:
        return left_range
    else:
        n = (left_range + right_range)/2
        if mon_list[n] < x:
            return get_index_help(x, mon_list, n, right_range)
        else:
            return get_index_help(x, mon_list, left_range, n)
 
 
# called during the simulation (need for speed)
def d_to_t(d, d_to_t_dict, sorted_keys):
    """
    Use the precalculated d_to_t_dict to find the t corresponding to d.  d will likely not be an
    exact key in the dictionary, so interpolate linearly between the two closest distance keys.
    The keys of d_to_t_dict are sorted ahead of time (during the initialization), since this must be a fast
    lookup.
    """
    # find the left and right distance keys bracketing d and the corresponding t values;
    # get_index runs in O(log n) time
    d_index_left = get_index(d, sorted_keys)
    d_index_right = d_index_left + 1
    d_left, d_right = sorted_keys[d_index_left], sorted_keys[d_index_right]
    t_left, t_right = d_to_t_dict[d_left], d_to_t_dict[d_right]
    # return t, the weighted average of the t boundaries, where the weight is the same as
    # that of d wrt the d boundaries
    return ((d - d_right)*t_left + (d_left - d)*t_right)/(d_left - d_right)
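
Here's a rough sketch of how I use it: dropping points that are evenly spaced by arc length rather than by t. (It assumes p, dic and sorted_keys were set up as in my earlier post, and uses NodeBox's oval() command to mark the points.)

# mark 20 points evenly spaced by distance along the path
total = sorted_keys[-1]   # the largest distance key is the total arc length
for i in range(20):
    d = total * i / 19.0
    t = d_to_t(d, dic, sorted_keys)
    pt = p.point(t)
    oval(pt.x - 2, pt.y - 2, 4, 4)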



Posted by Tom De Smedt on Aug 06, 2007

Thanks! I'm going to play around with it as soon as I have some time - that would be the day after tomorrow. Maybe we should think about integrating the code into a cachepath command or something.

Best regards,
Tom