Traditional anomaly detection (AD), based on statistics and thresholds, requires detailed domain knowledge to manually define these parameter thresholds, as well as continuous human intervention to adapt the AD algorithms to changes in, among others, context and data characteristics. Machine learning-based anomaly detection tackles these issues by learning (ab)normal behaviour directly from the data, without human intervention. This requires a significant amount of training data, so models are often trained only once in a representative environment and then deployed in various contexts and configurations. As anomalies often exhibit different, context-dependent characteristics, a machine learning-based anomaly detection model trained for a single reference context is likely to yield false positives and false negatives when deployed in a context that is too dissimilar. This creates a need for context-aware anomaly detection algorithms that automatically adapt to changes in context, deployment environment and configurations, data stream parameters, and available resources. In this paper, we propose a methodology to address the need for context-awareness in anomaly detection by means of a novel paradigm called cross-context learning. Specifically, we enable cross-context learning for anomaly detection by combining knowledge graphs, to capture the context in a formal manner, with transfer learning, to enable the adaptation. Compared to transfer learning without context-awareness, we observed performance increases of up to 6.658% on the evaluated datasets.