There are many definitions of intelligence. Most of them include, as a central element, the capacity to achieve goals, together with an ingredient of generality to distinguish it from narrowly applicable abilities. In these definitions, the goals themselves are left unspecified; their content has no bearing on whether something is considered intelligence or not. In other words, intelligence and goals are decoupled, or orthogonal.
However, definitions are just… definitions. The only requirement for a definition to be valid is logical consistency. Whether it applies to the real world as a useful concept is another matter altogether.
This brings us to consider whether, in practice, intelligence and goals are independent or not. The question is not only empirical, a matter of observing existing intelligences and their associated goal content, but also physical: whether intelligence and goals are constrained to correlate in physically realizable intelligences that do not yet exist. The main constraint that a physically realizable intelligence is subject to is a limit on computational resources.
So, in practice, is it possible to build an intelligence with arbitrary goals? And if not, what constraints are imposed on these goals, and how do these constraints come about?
I will stop here, as I think it’s not yet possible to think rigorously about these questions, although I think the questions themselves are well defined and relevant (e.g. for matters of AI safety). Here is some related reading:
- LessWrong – Muehlhauser-Goertzel Dialogue, Part 1
- Legg – A collection of definitions of intelligence
- I have previously considered intelligence from a naturalistic standpoint, as an optimization process that arose in living beings to counter entropy through behavior.