My 'students' and I have solved hundreds, if not thousands, of very complex problems and never calculated a p-value or performed a null hypothesis test. We have applied the same approaches to new product development - quite successfully.
That's great, but it's not science. Science does not seek to solve problems or develop products; it simply seeks "knowledge" - not to be confused with what people mean when they say they "know" something. (Nor does science seek truth.)
Simply saying the p value is less than .05 (a limit that Fisher pulled out of his back pocket with little to no thought at the dawn of statistics as a profession) without detailing the study design, including the sample sizes, and the underlying science is tantamount to scientific malpractice.
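To make that point concrete, here is a small illustration of my own (not from the thread, and using a simple known-variance z-test rather than any particular study's analysis): two very different designs that produce exactly the same p-value, one from a large effect in tiny groups and one from a trivial effect in enormous groups. The numbers are invented for the sketch; the point is that "p < .05" alone cannot distinguish the two.

```python
# Illustration (hypothetical numbers): the same p-value (~0.0455) from two
# wildly different designs, showing why a bare "p < .05" says little
# without the sample sizes and study design.
import math

def two_sided_p_from_z(z: float) -> float:
    # Two-sided p-value for a z statistic under a normal approximation.
    return math.erfc(abs(z) / math.sqrt(2))

def z_for_mean_diff(diff: float, sigma: float, n_per_group: int) -> float:
    # z statistic for a difference of two independent group means,
    # assuming a known common sd `sigma` and equal group sizes.
    se = sigma * math.sqrt(2 / n_per_group)
    return diff / se

# Design A: large effect (1 sd difference), tiny groups (n = 8 each).
z_a = z_for_mean_diff(diff=1.0, sigma=1.0, n_per_group=8)
# Design B: trivial effect (0.02 sd difference), huge groups (n = 20,000 each).
z_b = z_for_mean_diff(diff=0.02, sigma=1.0, n_per_group=20_000)

print(round(two_sided_p_from_z(z_a), 4))  # → 0.0455
print(round(two_sided_p_from_z(z_b), 4))  # → 0.0455
```

Both designs land on z = 2.0 and the same p-value, yet one effect might matter in practice and the other almost certainly does not; only the design and sample-size details reveal which is which.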
Agreed, but I think the malpractice usually comes in more on the other side of the equation. It is hard (although regrettably not impossible) to do a study without knowing the details of your study design and your sample size. The underlying science - that can be iffy. It is accepting a p-value alone that is the more common failing, IMO, whether it is practiced by journals or their readers. (Don't get me started on the media.)
While this behavior is quite common, it sounds like you have encountered virtually nothing else within your personal universe. In my universe, those who practice science well are few and far between, but not nonexistent.