One of the current challenges in improving recommender systems is to increase serendipity and diversity without compromising the precision and recall of the system. One possible way to approach this problem is to complement a standard recommender with another recommender that is “orthogonal” to it, i.e. one that recommends different items than the standard one. In this paper we investigate to what extent an inverted nearest neighbor model, k-furthest neighbor, is suitable for complementing a traditional kNN recommender. We compare the recommendations obtained by recommending items disliked by the people least similar to oneself with those obtained by recommending items liked by the people most similar to oneself. Our experiments show that the proposed furthest neighbor method provides more diverse recommendations with a tolerable loss in precision compared to traditional nearest neighbor methods. The recommendations obtained by k-furthest neighbor-based approaches are almost completely orthogonal to those obtained by their k-nearest neighbor-based counterparts.
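To make the core idea concrete, the following is a minimal, hypothetical sketch of the k-furthest-neighbor principle described above: find the k users *least* similar to the target user (here via cosine similarity on a toy rating matrix) and recommend the items those users *disliked*. The function name, the 1–5 rating scale with 0 meaning unrated, and the simple score inversion are all illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def k_furthest_recommend(ratings, user, k=2, n_rec=2):
    """Sketch of a k-furthest-neighbor recommender (illustrative only):
    find the k users LEAST similar to `user` by cosine similarity,
    then recommend the unseen items those users rated LOWEST."""
    target = ratings[user]
    sims = []
    for u, row in enumerate(ratings):
        if u == user:
            continue
        denom = np.linalg.norm(target) * np.linalg.norm(row)
        sims.append((u, float(target @ row) / denom if denom else 0.0))
    # Furthest neighbors = users with the LOWEST similarity to the target.
    furthest = [u for u, _ in sorted(sims, key=lambda t: t[1])[:k]]
    # Score items the target has not rated; a LOW rating from a dissimilar
    # user counts as evidence FOR recommending the item (inverted 1-5 scale).
    scores = {}
    for u in furthest:
        for item, r in enumerate(ratings[u]):
            if r > 0 and ratings[user][item] == 0:
                scores[item] = scores.get(item, 0.0) + (6 - r)
    return sorted(scores, key=scores.get, reverse=True)[:n_rec]

# Toy example: rows are users, columns are items, 0 = unrated.
ratings = np.array([
    [5, 4, 0, 0],   # target user 0
    [5, 5, 1, 0],   # a similar user
    [1, 0, 5, 1],   # a dissimilar user
])
print(k_furthest_recommend(ratings, user=0, k=1, n_rec=2))  # → [3, 2]
```

With k=1 the single furthest neighbor is the dissimilar user, and item 3 (which that user rated 1) is ranked ahead of item 2 (rated 5), illustrating how disliked items of dissimilar users become candidate recommendations.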