In prior work, a machine learning approach was used to develop a system that suggests values for 80 privacy settings based on a limited sample of five user preferences. Such suggestion systems may reduce the user burden of preference selection. However, such a system could also be exploited by a malicious provider to manipulate users' preference selections by nudging the output of the algorithm. This paper reports an experiment with such manipulation to clarify its impact and users' resistance or susceptibility to it. Users are shown to be highly accepting of suggestions, even when the suggestions are random (though less so than when the suggestions are nudged).