Factorization Machines (FMs) are an effective class of models for prediction over sparse data, capturing the interactions among users, items, and auxiliary information. However, the feature representations in most state-of-the-art FMs are fixed, which limits prediction performance because the same feature may have unequal predictive power under different input instances. In this paper, we propose a novel Feature-adjusted Factorization Machine (FaFM) model that adaptively adjusts the feature vector representations at both the vector level and the bit level. Specifically, we adopt a fully connected layer to adaptively learn the vector-level adjustment weight for each feature, and we design a user-item specific gate to refine each vector at the bit level and to filter the noise caused by over-adaptation to the input instance. Extensive experiments on two real-world datasets demonstrate the effectiveness of FaFM. Empirical results indicate that FaFM significantly outperforms the traditional FM, with a 10.89% relative improvement in Root Mean Square Error (RMSE), and consistently exceeds four state-of-the-art deep-learning-based models.
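The two adjustment stages described above can be sketched roughly as follows. This is a minimal NumPy illustration under assumed shapes, not the paper's exact formulation: the parameters `W_vec` and `W_gate`, and the softmax/sigmoid choices, are hypothetical stand-ins for the learned fully connected layer and the user-item specific gate.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy setup: m active features for one input instance, each with a
# k-dimensional embedding (shapes are illustrative assumptions).
m, k = 4, 8
V = rng.normal(size=(m, k))   # feature embeddings of the instance
u = rng.normal(size=k)        # user embedding
i = rng.normal(size=k)        # item embedding

# Vector-level adjustment: a fully connected layer maps each feature
# vector to a scalar importance weight for this instance
# (hypothetical parameterization).
W_vec = rng.normal(size=(k, 1))
a = softmax((V @ W_vec).ravel())   # one weight per feature vector
V_vec = a[:, None] * V             # vector-level adjusted embeddings

# Bit-level refinement: a user-item specific gate produces one value
# per dimension, filtering noise from over-adaptation
# (hypothetical gate parameterization).
W_gate = rng.normal(size=(2 * k, k))
g = sigmoid(np.concatenate([u, i]) @ W_gate)  # one gate value per bit
V_adj = V_vec * g                             # bit-level refined embeddings

print(V_adj.shape)  # (4, 8)
```

The adjusted embeddings `V_adj` would then feed the usual FM interaction terms; the gate values lie in (0, 1), so each bit of a feature vector can be damped individually for the given user-item pair.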