Further papers or code on SMA*+?
Problem
I'm interested in the Lovinger and Zhang paper Enhanced Simplified Memory-bounded A Star (SMA*+). Are there any further papers on this algorithm or publicly-visible code (in any language) implementing it?
I have some things I'd like to understand better about the algorithm (apologies, they're quite basic).
- What is the precise relationship between a state and a node in Algorithm 1? They are clearly not synonyms, otherwise $s(n)$ would not exist.
- How does one efficiently determine the parent of $w$ in Algorithm 2? Is it the case that $\forall n \in O$, $parent(n) \in O$? If yes, is this simply because we only cull leaves?
- Some problems (including the one I'm interested in) exhibit cycles. If I encounter two identical states $n$, $m$ at different depths $g(n) > g(m)$, I can always discount $n$. As far as I can see, though, nothing would seem to stop SMA*+ exploring some cycle $m \rightarrow x \rightarrow y \rightarrow z \rightarrow n \rightarrow m$ repeatedly at ever-greater depth. If this idea is wrong, what prunes these cycles? If it is right, is there a simple way of avoiding this? Do I have to simply prune $m$ from the successors of $n$ when computing $N$?
Solution
-
Your third question provides a hint as to why $s(n) \neq n$, where $n$ is a node and $s(n)$ is the state of $n$. Different nodes may contain the same state but different parents, and the path of a node may contain the same state multiple times if it contains a cycle. Theoretically, you could define $path(n) = n$. Practically, it is useful to define $n$ as a record or class containing useful data about $n$, which brings us to:
-
You can store $parent(n)$ in your data structure for $n$ to find the parent of node $w$ in $\mathcal{O}(1)$. There is no guarantee that $parent(n) \in O$ if $n \in O$: in Algorithm 1, we remove a node from $O$ and add its children when it is expanded, and in Algorithm 2, we add $parent(w)$ to $O$ if it is not already there, while removing $w$.
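A sketch of that bookkeeping, assuming a stored parent pointer and an "in open list" flag (the names `SearchNode` and `cull_leaf` are mine, not from the paper):

```python
class SearchNode:
    """Hypothetical minimal node: a state, a parent pointer, and an
    'in open list' flag so membership checks are O(1)."""
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent   # stored pointer => parent(n) found in O(1)
        self.open = False

def cull_leaf(open_list, w):
    """Sketch of the Algorithm 2 bookkeeping described above: remove a
    culled leaf w from the open list O and re-add parent(w) if absent."""
    open_list.remove(w)
    w.open = False
    p = w.parent               # O(1) via the stored pointer
    if p is not None and not p.open:
        open_list.append(p)    # the parent re-joins the frontier
        p.open = True

root = SearchNode("a")
leaf = SearchNode("b", parent=root)
open_list = [leaf]
leaf.open = True
cull_leaf(open_list, leaf)     # now open_list == [root]
```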
-
In terms of correctness, you typically don't have to worry about cycles. As long as the step cost from one state to another is always greater than $0$, which is almost always the case in practice, each lap around a cycle increases $g(n)$ and therefore $f(n)$, making nodes without cycles more favorable.
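A tiny numeric check of that claim (toy numbers, not from the paper): with positive step costs, every lap around a cycle adds its total cost to $g(n)$, while $h(s(n))$ stays fixed for the repeated state, so $f(n) = g(n) + h(s(n))$ strictly grows per lap.

```python
step_cost = 1.0   # assumed positive step cost
h_m = 2.0         # heuristic value of the repeated state m (toy value)
cycle_len = 3     # m -> x -> y -> m

g0 = 3.0          # g of the first node whose state is m
f_per_lap = [g0 + lap * cycle_len * step_cost + h_m for lap in range(4)]
# f_per_lap == [5.0, 8.0, 11.0, 14.0]: f strictly increases each lap,
# so cyclic nodes lose to the acyclic node with f = g0 + h_m.
```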
In terms of performance, exploring cycles is a waste of time. In regular A* search you can detect and remove nodes with cycles by either looking in the path of a node or maintaining a set of explored states.
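Both techniques can be sketched as follows (a hypothetical `PathNode`, not the library's `Node` class):

```python
class PathNode:
    """Minimal node for the sketch: just a state and a parent pointer."""
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent

def creates_cycle(node, successor_state):
    """Option 1: walk the parent chain looking for the successor's state.
    Costs O(depth) per successor but needs no extra memory."""
    while node is not None:
        if node.state == successor_state:
            return True
        node = node.parent
    return False

def prune_explored(successor_states, explored):
    """Option 2: drop successors whose state is in an explored set.
    O(1) per check, at the cost of storing every state seen."""
    return [s for s in successor_states if s not in explored]

# m -> x -> y; generating m again from y would close a cycle
m = PathNode("m")
y = PathNode("y", parent=PathNode("x", parent=m))
```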
It has been a while since I worked on SMA*+, but I think I tried to add cycle detection and ran into an issue. That could be a good area for future research. The SMA*+ code is part of a search library I was working on. It needs cleaning up before I would consider publishing it, but I'll post the node code here:
class _SMAStarPlusNode(Node):
    """Implements comparison for heapq."""

    def __init__(self, node, f_cost):
        # Copy attributes from node
        self.__dict__ = node.__dict__
        # Add SMA*+ attributes
        self.f_cost = f_cost
        self.num_successors = 0
        self.child_fs = {}
        self.open = True  # Track whether in open list, to avoid scanning entire list

    def __lt__(self, other):
        # Lower f-cost first; ties broken by preferring deeper nodes
        return (self.f_cost, -self.depth) < (other.f_cost, -other.depth)

    def __ge__(self, other):
        return (self.f_cost, -self.depth) >= (other.f_cost, -other.depth)
class Node(object):
    """A node in a search tree.

    This class is very minimal, utilizing lazy evaluation for most attributes.
    Since thousands of instances of this class may exist at a time, every
    attribute adds greatly to the memory requirements of the program.
    """

    def __init__(self, parent, state):
        self.parent = parent
        self.state = state
        # Lazily evaluated attributes (added when needed):
        # _path_cost - added in Problem.path_cost
        # _depth
        # _path

    @property
    def depth(self):
        """Return the depth of this node."""
        # Lazily evaluate depth
        try:
            return self._depth
        except AttributeError:
            return self._make_depth()

    def _make_depth(self):
        try:
            # Non-root nodes have a depth one greater than their parent's
            _depth = self.parent.depth + 1
        except AttributeError:
            # Root node has depth of 0
            _depth = 0
        # Cache and return
        self._depth = _depth
        return _depth

    @property
    def path(self):
        """Return the path to this node.

        Returns:
            tuple; a tuple of states.
        """
        # Lazy evaluation
        try:
            return self._path
        except AttributeError:
            return self._make_path()

    def _make_path(self):
        try:
            # Add current node to path to parent node
            _path = self.parent.path + (self.state,)
        except AttributeError:
            _path = (self.state,)
        # Add to cache
        self._path = _path
        return _path


class Problem(object):
    ...

    def path_cost(self, node):
        """Return the path cost to a given node.

        This method belongs to Problem instead of Node because Node does
        not store Problem (to minimize memory usage).
        """
        # Lazily evaluate path cost
        try:
            return node._path_cost
        except AttributeError:
            if node.parent is None:
                _path_cost = 0.0
            else:
                _path_cost = (self.path_cost(node.parent) +
                              self.step_cost(node.parent.state, node.state))
            # Cache
            node._path_cost = _path_cost
            return _path_cost
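The lazy try/except caching pattern used throughout (compute an attribute on first access, then store it on the instance) can be exercised in isolation; this is a self-contained miniature, not the library code:

```python
class LazyDepthNode:
    """Miniature of the caching pattern used by Node.depth above."""
    def __init__(self, parent=None):
        self.parent = parent

    @property
    def depth(self):
        try:
            return self._depth      # cache hit after first access
        except AttributeError:
            if self.parent is None:
                self._depth = 0     # root
            else:
                self._depth = self.parent.depth + 1
            return self._depth

leaf = LazyDepthNode(parent=LazyDepthNode(parent=LazyDepthNode()))
assert leaf.depth == 2    # computed recursively on first access
assert leaf._depth == 2   # and cached on the instance afterwards
```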
I removed some code that only existed to work around Python recursion limitations while evaluating lazy attributes. It was big and messy, and hopefully isn't an issue with the latest Python, but be aware of this if you copy the code verbatim.
Context
StackExchange Computer Science Q#134925, answer score: 3