
Caches implementation in C++


Problem

I had a test task for an internship where the main part was implementing fixed-size caches with different displacement policies (LRU/LFU/FIFO). I completed the task, but was rejected afterwards. Now I am wondering how my solution might be improved.

Requirements for the implementation are:

  • The cache must be thread-safe

  • Put and Get operations must be implemented (for storing and retrieving values by key, respectively)

LRU Cache implementation:

```
#include <cstddef>
#include <limits>
#include <list>
#include <mutex>
#include <stdexcept>
#include <unordered_map>
#include <utility>

template <typename Key, typename Value>
class lru_cache {
public:
    using value_type = typename std::pair<Key, Value>;
    using value_it = typename std::list<value_type>::iterator;
    using operation_guard = typename std::lock_guard<std::mutex>;

    lru_cache(size_t max_size) : max_cache_size{max_size} {
        if (max_size == 0) {
            max_cache_size = std::numeric_limits<size_t>::max();
        }
    }

    void Put(const Key& key, const Value& value) {
        operation_guard og{safe_op};
        auto it = cache_items_map.find(key);

        if (it == cache_items_map.end()) {
            if (cache_items_map.size() + 1 > max_cache_size) {
                // remove the last element from cache
                auto last = cache_items_list.crbegin();

                cache_items_map.erase(last->first);
                cache_items_list.pop_back();
            }

            cache_items_list.push_front(std::make_pair(key, value));
            cache_items_map[key] = cache_items_list.begin();
        }
        else {
            it->second->second = value;
            cache_items_list.splice(cache_items_list.cbegin(), cache_items_list,
                                    it->second);
        }
    }

    const Value& Get(const Key& key) const {
        operation_guard og{safe_op};
        auto it = cache_items_map.find(key);

        if (it == cache_items_map.end()) {
            throw std::range_error("No such key in the cache");
        }
        else {
            cache_items_list.splice(cache_items_list.begin(), cache_items_list,
                                    it->second);

            return it->second->second;
        }
    }

    bool Exists(const Key& key) const noexcept {
        operation_guard og{safe_op};

        return cache_items_map.find(key) != cache_items_map.end();
    }

    size_t Size() const noexcept {
        operation_guard og{safe_op};

        return cache_items_map.size();
    }

private:
    mutable std::list<value_type> cache_items_list;
    std::unordered_map<Key, value_it> cache_items_map;
    size_t max_cache_size;
    mutable std::mutex safe_op;
};
```

Solution

It looks like you put together a nice package given this was an internship application. You've used templates, the STL, and some of the newer features of C++, so in my opinion you showed a lot of promise. Perhaps there were other factors beyond your control that affected the candidate selection.

That said, to address your specific questions, here are some tips that jumped out:

  1. No concrete interface defined



You were given a specification of a Get/Put interface, so it would have been nice to see an abstract base class that your caches inherit from, one that enforces implementation of the Get/Put methods. This provides two big advantages:

  • Future developers who want to extend your design with new cache algorithms have a 'contract' in the form of a base class they must follow which provides greater cohesion with the rest of your system.



  • Programmers wishing to use your new cache classes in their systems can maintain pointers to the base class which allows them to remain ignorant of the implementation details for each cache type. This encapsulation is very handy as your programs grow large.



  2. Duplicate code



If you look at your three cache implementations, you'll notice a lot of areas that are repeated. Apart from the Get/Put methods, your thread-safety operation guards and all of your member variables could easily be pushed to a common base class. This also provides two handy features:

  • Removing the thread-safety mechanism from the 'cache-algorithm' (LRU/LFU/FIFO) decouples two distinct problems into separate domains. This allows the thread-safety implementation to be overhauled or replaced in the future without touching the core algorithm code. NOT doing this isn't a show-stopper, because your locking logic was quite small, but showing an injectable thread-locking mechanism might have scored some additional 'points' in the evaluation of your code.

  • Once you start pushing the cache_items_map, max_cache_size, and safe_op members into a common base class, you'll notice that the 'cache-algorithm' bits of code (i.e. the actual logic for the LRU/LFU/FIFO lookup and rollover) are actually independent of how the data is stored in the base class. You may find that you can refactor the algorithm logic out of the class entirely into its own separate class. This would be an example of the Strategy design pattern.
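For instance (again with hypothetical names, and with the storage deliberately simplified to a plain map), the shared state, lock, and size limit could live in a base class that delegates only the bookkeeping decisions to the concrete policy:

```
#include <cstddef>
#include <list>
#include <mutex>
#include <stdexcept>
#include <unordered_map>

// Hypothetical common base: owns the storage, the mutex, and the size limit.
// Concrete caches only supply the policy hooks below.
template <typename Key, typename Value>
class cache_base {
public:
    explicit cache_base(std::size_t max_size) : max_cache_size{max_size} {}
    virtual ~cache_base() = default;

    void Put(const Key& key, const Value& value) {
        std::lock_guard<std::mutex> og{safe_op};
        auto it = cache_items_map.find(key);
        if (it == cache_items_map.end()) {
            if (cache_items_map.size() >= max_cache_size) {
                cache_items_map.erase(EvictCandidate());  // policy picks the victim
            }
            cache_items_map[key] = value;
            OnInsert(key);
        } else {
            it->second = value;
            OnTouch(key);
        }
    }

    const Value& Get(const Key& key) {
        std::lock_guard<std::mutex> og{safe_op};
        auto it = cache_items_map.find(key);
        if (it == cache_items_map.end()) {
            throw std::range_error("No such key in the cache");
        }
        OnTouch(key);
        return it->second;
    }

protected:
    // The 'strategy' part: LRU, LFU, and FIFO differ only in these hooks.
    virtual void OnInsert(const Key& key) = 0;  // a new key was stored
    virtual void OnTouch(const Key& key) = 0;   // an existing key was used
    virtual Key EvictCandidate() = 0;           // choose and forget a victim

private:
    std::unordered_map<Key, Value> cache_items_map;
    std::size_t max_cache_size;
    std::mutex safe_op;
};

// A FIFO cache then reduces to a handful of lines:
template <typename Key, typename Value>
class fifo_cache : public cache_base<Key, Value> {
public:
    using cache_base<Key, Value>::cache_base;

protected:
    void OnInsert(const Key& key) override { order.push_back(key); }
    void OnTouch(const Key&) override {}  // FIFO ignores accesses
    Key EvictCandidate() override {
        Key victim = order.front();
        order.pop_front();
        return victim;
    }

private:
    std::list<Key> order;  // insertion order, oldest at the front
};
```

Note this simplification stores values directly in the map, whereas your LRU keeps list iterators; the point is only that the locking and storage live in one place while each policy contributes a few short hooks.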

  3. Test code



It's been a while since I had to do a technical interview that involved active coding, but it's good practice to create test code for any new software you write to validate your logic. I've not actually tried to compile and run your sample code above, but are you sure it works? Sure enough to send it up on a Mars rover, put it into a medical device, monitor a nuclear facility, or perform high-frequency trades on the stock market? Providing test code provides (you guessed it) two big advantages:

  • Dogfooding: Test code requires you to use your shiny new classes as an application developer rather than as a software architect. One of the best ways to streamline your software is to spend some time using it. You'll find the areas that are awkward to type, hard to instantiate, hard to delete, etc.



  • Verification: It's hard to evaluate an algorithm without running it through its paces. Does creating an instance of lfu_cache and trying to populate it with std::numeric_limits<int>::max() ints work well? How about 256-byte structs, or 32 MB pictures? Does your rollover work properly? Does the LFU really purge the least frequently used item? And so on. Providing verification code shows exactly how your logic performs, and provides the surety to both you and your interviewers that your classes will work as advertised.
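To make this concrete, here is the sort of minimal check I have in mind. It carries a compacted copy of your LRU cache only so the snippet compiles on its own (in a real project the tests would include your header), and the expected behaviour asserted below is my reading of your eviction logic:

```
#include <cassert>
#include <cstddef>
#include <list>
#include <mutex>
#include <stdexcept>
#include <unordered_map>
#include <utility>

// Compacted copy of the question's LRU cache, included only so this
// test snippet is self-contained.
template <typename Key, typename Value>
class lru_cache {
public:
    explicit lru_cache(std::size_t max_size) : max_cache_size{max_size} {}

    void Put(const Key& key, const Value& value) {
        std::lock_guard<std::mutex> og{safe_op};
        auto it = cache_items_map.find(key);
        if (it == cache_items_map.end()) {
            if (cache_items_map.size() + 1 > max_cache_size) {
                cache_items_map.erase(cache_items_list.back().first);
                cache_items_list.pop_back();
            }
            cache_items_list.emplace_front(key, value);
            cache_items_map[key] = cache_items_list.begin();
        } else {
            it->second->second = value;
            cache_items_list.splice(cache_items_list.begin(), cache_items_list,
                                    it->second);
        }
    }

    const Value& Get(const Key& key) {
        std::lock_guard<std::mutex> og{safe_op};
        auto it = cache_items_map.find(key);
        if (it == cache_items_map.end()) {
            throw std::range_error("No such key in the cache");
        }
        cache_items_list.splice(cache_items_list.begin(), cache_items_list,
                                it->second);
        return it->second->second;
    }

    bool Exists(const Key& key) const noexcept {
        std::lock_guard<std::mutex> og{safe_op};
        return cache_items_map.find(key) != cache_items_map.end();
    }

private:
    using item = std::pair<Key, Value>;
    std::list<item> cache_items_list;
    std::unordered_map<Key, typename std::list<item>::iterator> cache_items_map;
    std::size_t max_cache_size;
    mutable std::mutex safe_op;
};

bool test_lru_eviction() {
    lru_cache<int, int> cache{2};   // room for two entries

    cache.Put(1, 10);
    cache.Put(2, 20);
    assert(cache.Get(1) == 10);     // touching 1 makes it most recently used

    cache.Put(3, 30);               // cache full: key 2 is now least recent
    assert(cache.Exists(1));        // ...so 1 survives
    assert(!cache.Exists(2));       // ...and 2 was evicted
    assert(cache.Get(3) == 30);

    bool threw = false;
    try {
        cache.Get(2);
    } catch (const std::range_error&) {
        threw = true;
    }
    assert(threw);                  // evicted keys throw on Get
    return true;
}
```

Calling test_lru_eviction() from main() should run clean if the eviction order behaves as advertised; a second function each for the LFU and FIFO variants would round out the suite.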



Good luck with your job search. If you're writing code like this pre-employment, I think you'll do just fine.

Context

StackExchange Code Review Q#128560, answer score: 8
