Conflicting Results from t-test and F-based stepwise regression in multiple regression.
I am currently tasked with building a multiple regression model with two candidate predictor variables. That means there are potentially three terms in the model besides the intercept: Predictor A (PA), Predictor B (PB), and their interaction PA*PB.
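In case the notation helps, the full model I have in mind is the following (y is just my shorthand for the response):

$$ y_i = \beta_0 + \beta_1\,\mathrm{PA}_i + \beta_2\,\mathrm{PB}_i + \beta_3\,(\mathrm{PA}_i \times \mathrm{PB}_i) + \varepsilon_i $$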
In one instance, I fit an LS model containing all three terms and did simple t-tests. I divided the parameter estimates by their standard errors to calculate t-statistics, and determined that only the intercept and PA*PB coefficients were significantly different from zero.
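Roughly, this step corresponds to something like the sketch below (Python/statsmodels with simulated data and made-up column names y, PA, PB, purely to show what I mean by the t-tests; my actual data and software may differ):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame with columns y, PA, PB (names assumed for illustration).
rng = np.random.default_rng(0)
df = pd.DataFrame({"PA": rng.normal(size=50), "PB": rng.normal(size=50)})
df["y"] = 1.0 + 0.5 * df["PA"] * df["PB"] + rng.normal(size=50)

# Full model: intercept, PA, PB, and the PA*PB interaction.
full = smf.ols("y ~ PA + PB + PA:PB", data=df).fit()

# The t-statistics are the coefficient estimates divided by their standard errors,
# which is what the regression summary table reports.
print(full.params / full.bse)   # identical to full.tvalues
print(full.summary())
```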
In another instance, I did stepwise regression: I first fit a model with only PA, then fit a model with both PA and PB, and did a partial F-test based on the reduction in the error sum of squares (SSE) between the two nested models. The F-test concluded that PB was a significant predictor to include in the model, and when I repeated the procedure with the PA*PB term, it was found to reduce the SSE significantly as well.
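The stepwise/F-test step was essentially this (again only a sketch, reusing the simulated df from above; compare_f_test and a sequential anova_lm table are just one way to get the partial F-tests I describe):

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Nested models, fit to the same (hypothetical) data frame df as above.
m_a   = smf.ols("y ~ PA", data=df).fit()
m_ab  = smf.ols("y ~ PA + PB", data=df).fit()
m_int = smf.ols("y ~ PA + PB + PA:PB", data=df).fit()

# Partial F-test for adding PB to the PA-only model,
# based on the drop in SSE between the two nested fits.
f_stat, p_value, df_diff = m_ab.compare_f_test(m_a)
print(f_stat, p_value)

# Manual version of the same statistic, straight from the sums of squared errors:
#   F = ((SSE_reduced - SSE_full) / q) / (SSE_full / df_resid_full), with q = 1 here.
sse_a, sse_ab = m_a.ssr, m_ab.ssr
f_manual = ((sse_a - sse_ab) / 1) / (sse_ab / m_ab.df_resid)
print(f_manual)

# A sequential ANOVA table tests each term as it is added in order,
# which mirrors the stepwise procedure (PA, then PB, then PA*PB).
print(sm.stats.anova_lm(m_a, m_ab, m_int))
```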
So in summary, the t-test approach tells me that only the cross-product term PA*PB has a significant regression coefficient when all terms are included in the model, but the stepwise approach tells me to include all terms in the model.
Based on these conflicting results, what course of action would you recommend?