Sequence processing problems
Problem 1:
Sum the even numbers among the first n Fibonacci numbers.
Solution:
def fib(k):
    """Compute the kth Fibonacci number.

    >>> fib(1)
    0
    >>> fib(2)
    1
    >>> fib(11)
    55
    """
    def recursive_fib(prev, curr, k):
        if k - 1 == 0:
            return curr
        else:
            return recursive_fib(curr, prev + curr, k - 1)
    return recursive_fib(1, 0, k)
def iseven(n):
    return n % 2 == 0
def sum_even_fibs(n):
    """Sum the even numbers among the first n Fibonacci numbers.

    >>> sum_even_fibs(11)
    44
    """
    return sum(filter(iseven, map(fib, range(1, n + 1))))
def sum_even_fibs_gen(n):
    """Sum the even numbers among the first n Fibonacci numbers.

    >>> sum_even_fibs_gen(11)
    44
    """
    return sum(fib(k) for k in range(1, n + 1) if fib(k) % 2 == 0)

Problem 2:
List the letters in the acronym for a name: the first letter of each capitalized word.
Solution:
def first(s):
    return s[0]

def iscap(s):
    return len(s) > 0 and s[0].isupper()

def acronym(name):
    """Return a tuple of the letters that form the acronym for name.

    >>> acronym('University of California Berkeley')
    ('U', 'C', 'B')
    """
    return tuple(map(first, filter(iscap, name.split())))
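To make the functional pipeline explicit, the stages of acronym can be inspected one at a time; this is a sketch reusing the helper names above, with the intermediate values noted in comments:

```python
def first(s):
    return s[0]

def iscap(s):
    return len(s) > 0 and s[0].isupper()

name = 'University of California Berkeley'
words = name.split()                      # ['University', 'of', 'California', 'Berkeley']
capitalized = list(filter(iscap, words))  # drops 'of', keeping the capitalized words
letters = tuple(map(first, capitalized))  # first letter of each remaining word
print(letters)  # → ('U', 'C', 'B')
```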
def acronym_gen(name):
    """Return a tuple of the letters that form the acronym for name.

    >>> acronym_gen('University of California Berkeley')
    ('U', 'C', 'B')
    """
    return tuple(w[0] for w in name.split() if iscap(w))

Functions
fib, acronym and sum_even_fibs are written in the functional paradigm and tested.

Functions
acronym_gen and sum_even_fibs_gen are written in the imperative paradigm and tested.

My question:
After using the predefined functions split, filter and map in the above five user-defined functions, are the aforementioned paradigms implemented correctly? If not, please let me know of possible improvements.

Solution
Your fib function would benefit strongly from memoization: currently you don't reuse the values you have already calculated, so the computation takes O(n^2) time, as each fib(k) call takes O(n) on average and you make O(n) calls. By storing the results of previous calls, you can bring the total complexity down to O(n), since each subsequent call then takes O(1) and you make O(n) calls.
An easy way to do this in Python 3 is the functools lru_cache decorator, like so:

from functools import lru_cache

@lru_cache(maxsize=None)
def function_to_memoize(n):
    ...

As a side note, there is a fun fact about Fibonacci numbers: every 3rd one is even and the rest are odd, so you can use that to avoid checking parity. See my answer for more details.
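To see the speedup concretely, here is a sketch that memoizes the naive doubly recursive formulation with lru_cache. It restates fib from scratch rather than decorating the accumulator version above, since the cache then also serves the inner recursive calls:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(k):
    """kth Fibonacci number, keeping the question's convention fib(1) == 0."""
    if k == 1:
        return 0
    if k == 2:
        return 1
    return fib(k - 1) + fib(k - 2)

# Repeated fib(k) calls now cost O(1), so the generator-based sum is O(n):
print(sum(fib(k) for k in range(1, 12) if fib(k) % 2 == 0))  # → 44

# The every-3rd-is-even fact: under this 1-based convention the even
# Fibonacci numbers sit at k = 1, 4, 7, 10, ..., so parity checks can be skipped:
print(sum(fib(k) for k in range(1, 12, 3)))  # → 44
```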
As for your other answer, while you seem to have the correct code, I would like to present an alternative solution for the sake of completeness, one which first extracts the first letter of every word and then filters by capitalization. I also made use of lambdas for such simple functions.

def acronym(name):
    return tuple(filter(lambda x: x.isupper(), map(lambda x: x[0], name.split())))

I would wager that your code runs faster, though, since it does a map on a filtered result rather than a filter on a mapped result, and filter usually reduces the search space while map does not.
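For completeness, the two orderings discussed here can be checked against each other; a small sketch (the function names are mine, not from the original posts):

```python
# The question's order: filter the words first, then map to first letters.
def acronym_filter_first(name):
    return tuple(map(lambda w: w[0], filter(lambda w: w[:1].isupper(), name.split())))

# This answer's order: map to first letters first, then filter by case.
def acronym_map_first(name):
    return tuple(filter(lambda c: c.isupper(), map(lambda w: w[0], name.split())))

name = 'University of California Berkeley'
print(acronym_filter_first(name))  # → ('U', 'C', 'B')
print(acronym_map_first(name))     # → ('U', 'C', 'B')
```

The filter-first version applies the indexing lambda only to the words that survive the filter, which is the point above about filter reducing the search space.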
Context
StackExchange Code Review Q#85799, answer score: 8