Find index of lowest unique integer in each line of the file
Problem
The code below finds the index of the lowest unique integer on each line of the file. The numbering starts at 1, and when there's no unique value, 0 should be output.

The sample input looks like this:

3 3 9 1 6 5 8 1 5 3
9 2 9 9 1 8 8 8 2 1 1

And the sample output is this:

5
0

I wonder if this code can be simplified or made more elegant and idiomatic in any way. It's also interesting to me whether I could get by without the IntMap and solve the problem with, e.g., lists beautifully.

module Main where
import Prelude hiding (filter)
import Control.Applicative ((<$>))
import Data.IntMap (IntMap, alter, empty, filter, findMin, fromList, size)
import System.Environment

counts :: [(Int, Int)] -> IntMap [Int]
counts = foldl f empty
  where f m (i,v) = alter f' v m
          where f' :: Maybe [Int] -> Maybe [Int]
                f' (Just is) = Just (i:is)
                f' Nothing   = Just [i]

g :: IntMap [Int] -> Int
g a = i
  where rs = filter ((== 1) . length) a
        h m | size m == 0 = fromList [(0, [0])]
            | otherwise   = m
        (_, [i]) = findMin $ h rs

minUnique :: [Int] -> Int
minUnique l = g r
  where r = counts (zip [1..] l)

main :: IO ()
main =
  head <$> getArgs >>= readFile >>=
    putStrLn . unlines . map (show . minUnique . map read . words) . lines

Solution
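Before the individual suggestions, it helps to make the data flow concrete. The following is a self-contained trace of the three stages on the first sample row; it is my reconstruction (using fromListWith rather than the question's alter-based fold), not part of the original post:

```haskell
import Prelude hiding (filter)
import Data.IntMap (IntMap, filter, findMin, fromListWith, toList)

-- value -> ascending list of (1-based) positions where it occurs
occurrences :: [Int] -> IntMap [Int]
occurrences l = fromListWith (flip (++)) [ (v, [i]) | (i, v) <- zip [1 ..] l ]

main :: IO ()
main = do
  let m  = occurrences [3,3,9,1,6,5,8,1,5,3]
      rs = filter ((== 1) . length) m   -- keep values occurring exactly once
  print (toList m)    -- [(1,[4,8]),(3,[1,2,10]),(5,[6,9]),(6,[5]),(8,[7]),(9,[3])]
  print (toList rs)   -- [(6,[5]),(8,[7]),(9,[3])]
  print (findMin rs)  -- (6,[5]): the lowest unique value 6 sits at position 5
```

The findMin at the end is why the map is keyed by value rather than by index: the minimal key of the filtered map is exactly the lowest unique integer.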
First off, your usage of alter can easily be replaced by insertWith:

counts :: [(Int, Int)] -> IntMap [Int]
counts = foldr f empty
  where f (i,v) = insertWith (const (i:)) v [i]

You could also use fromListWith, which is neater but less efficient due to the need to concatenate lists:

counts :: [(Int, Int)] -> IntMap [Int]
counts = fromListWith (flip (++)) . map (\(i,v) -> (v,[i]))

Second, you could shorten
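Both formulations build exactly the same map, which is easy to confirm; countsI and countsF are my names for the insertWith and fromListWith versions respectively:

```haskell
import Data.IntMap (IntMap, empty, fromListWith, insertWith, toList)

-- insertWith version: foldr means the leftmost index ends up at the head
countsI :: [(Int, Int)] -> IntMap [Int]
countsI = foldr f empty
  where f (i,v) = insertWith (const (i:)) v [i]

-- fromListWith version: flip (++) likewise keeps indices in ascending order
countsF :: [(Int, Int)] -> IntMap [Int]
countsF = fromListWith (flip (++)) . map (\(i,v) -> (v,[i]))

main :: IO ()
main = do
  let ps = zip [1 ..] [3,3,9,1,6,5,8,1,5,3]
  print (toList (countsI ps))
  print (countsI ps == countsF ps)   -- True
```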
g quite a bit as well using a view:

g a = case minView $ filter ((== 1) . length) a of
  Nothing       -> 0
  Just ([i], _) -> i

which is also probably slightly less efficient, as it will construct at least a thunk for the rest of the IntMap.

Concerning different approaches, you could make a recursive function that removes all duplicates:
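Putting the fromListWith and minView suggestions together, the whole program might look like the sketch below. This is my assembly, not a drop-in file from the answer, and main here runs the pipeline on the sample input inline rather than reading a file:

```haskell
import Prelude hiding (filter)
import Data.IntMap (IntMap, filter, fromListWith, minView)

counts :: [(Int, Int)] -> IntMap [Int]
counts = fromListWith (flip (++)) . map (\(i,v) -> (v,[i]))

minUnique :: [Int] -> Int
minUnique l = case minView (filter ((== 1) . length) (counts (zip [1 ..] l))) of
  Just ([i], _) -> i
  _             -> 0   -- Nothing (no unique value); non-singletons are filtered out

main :: IO ()
main = putStr (process "3 3 9 1 6 5 8 1 5 3\n9 2 9 9 1 8 8 8 2 1 1\n")
  where process = unlines . map (show . minUnique . map read . words) . lines
```

Collapsing g into minUnique removes the need for the fromList [(0, [0])] sentinel entirely: the Nothing case of the view is the "no unique value" case.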
import Data.List
import Data.Ord

dedup :: [(Int,Int)] -> [(Int,Int)]
dedup [] = []
dedup ((i,v):ps)
  | null dups = (i,v) : dedup ps
  | otherwise = dedup ndups
  where (dups, ndups) = partition ((== v) . snd) ps

minUnique :: [Int] -> Int
minUnique l
  | null ddups = 0
  | otherwise  = fst $ minimumBy (comparing snd) ddups
  where ddups = dedup $ zip [1..] l

Once more this trades away a bit of efficiency, as the traversal in partition makes it O(n²). Not sure whether that can be helped, though, without involving more complex data structures.
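On that last point: sorting on the value first brings duplicates together, which gets duplicate removal down to O(n log n) with plain lists only. A sketch (uniqueOnly is my name; groupBy plus a length check does the work partition did above):

```haskell
import Data.List (groupBy, minimumBy, sortOn)
import Data.Ord (comparing)
import Data.Function (on)

-- Keep only pairs whose value occurs exactly once: sorting on the value
-- clusters equal values, so one grouping pass replaces the repeated
-- partition traversals.
uniqueOnly :: [(Int, Int)] -> [(Int, Int)]
uniqueOnly = concat . filter ((== 1) . length)
           . groupBy ((==) `on` snd) . sortOn snd

minUnique :: [Int] -> Int
minUnique l
  | null us   = 0
  | otherwise = fst (minimumBy (comparing snd) us)
  where us = uniqueOnly (zip [1 ..] l)

main :: IO ()
main = mapM_ (print . minUnique)
  [ [3,3,9,1,6,5,8,1,5,3]       -- 5
  , [9,2,9,9,1,8,8,8,2,1,1] ]   -- 0
```

Since uniqueOnly returns the pairs already sorted by value, minimumBy (comparing snd) could even be replaced by head; minimumBy is kept here to stay parallel to the version above.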
Context
StackExchange Code Review Q#40977, answer score: 5