We have a set of linear constraints, (*) $Ax \le b$, for a matrix $A$ and a vector $b$. I'm looking for a procedure to decide whether the solution set of (*), projected onto a subvector $y$ of $x$, is contained in the solution set projected onto another subvector $z$ (of the same dimension as $y$).
To put it another way: if we project the set of all solutions of (*) onto the components of $y$, is the resulting set contained in the projection onto the components of $z$? The order of the components of $y$ and $z$ does matter.
I can imagine that there are standard procedures for answering this question. Are there efficient ones? Where should I look? I haven't yet found anything related in textbooks on Linear Programming using the keywords "subspace" and "sub(-)solution" -- maybe I'm using the wrong keywords?
My own attempts:
(1) Eliminate the variables outside $y$ by computing suitable positive linear combinations of the rows of $A$ (this is Fourier–Motzkin elimination). This yields a set of constraints over $y$ that are implied by (*). If we do the same for $z$, it remains to check whether every one of the latter constraints follows from the former, perhaps using the simplex algorithm, which also reveals redundancies. But this seems inefficient: in the worst case one has to compute exponentially many linear combinations.
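In case it helps to make step (1) concrete, here is a minimal, unoptimized sketch of the elimination step for one variable of a system $Ax \le b$; the function and variable names are my own:

```python
import numpy as np

def fourier_motzkin(A, b, j, eps=1e-12):
    """Eliminate variable j from A x <= b by taking, for every pair of rows
    with opposite sign in column j, the positive combination that cancels x_j.
    Returns (A', b') over the remaining variables (column j kept as zeros)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    pos = [i for i in range(len(b)) if A[i, j] > eps]
    neg = [i for i in range(len(b)) if A[i, j] < -eps]
    zero = [i for i in range(len(b)) if abs(A[i, j]) <= eps]
    rows = [A[i] for i in zero]          # rows not involving x_j survive as-is
    rhs = [b[i] for i in zero]
    for p in pos:
        for n in neg:
            # normalize both rows so x_j has coefficient +1 and -1, then add
            rows.append(A[p] / A[p, j] + A[n] / (-A[n, j]))
            rhs.append(b[p] / A[p, j] + b[n] / (-A[n, j]))
    return np.array(rows), np.array(rhs)
```

Repeating this for every variable outside $y$ yields the implied constraints over $y$; the pairwise combinations are exactly where the worst-case exponential blow-up comes from.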
(2) Perhaps it is possible to formulate the above problem as a new linear program, which could then be solved with simplex? I don't yet have a precise idea how.
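One way attempts (1) and (2) might be combined: eliminate only the variables outside $z$, obtaining an H-description $Cz \le d$ of that projection, and then verify each inequality of $Cz \le d$ against the other projection by a single LP, namely $\max\{c \cdot x_{I_y} : Ax \le b\} \le d_k$, where $I_y$ indexes the components of $y$ in $x$. A hedged sketch with SciPy (function and parameter names are my own, not a standard API):

```python
import numpy as np
from scipy.optimize import linprog

def projection_contained(A, b, Iy, C, d, tol=1e-9):
    """Check proj_y({x : A x <= b}) ⊆ {z : C z <= d}, where Iy lists the
    positions of the y-components in x.  One LP per row of C z <= d."""
    A, b, C, d = map(np.asarray, (A, b, C, d))
    n = A.shape[1]
    for c_row, d_k in zip(C, d):
        obj = np.zeros(n)
        obj[list(Iy)] = c_row  # objective acts only on the y-components
        # maximize obj.x  ==  minimize (-obj).x; unbound the default 0-bounds
        res = linprog(-obj, A_ub=A, b_ub=b, bounds=[(None, None)] * n)
        if res.status == 3:    # objective unbounded above: inequality violated
            return False
        if res.status != 0:
            raise RuntimeError("LP solver failed: " + res.message)
        if -res.fun > d_k + tol:
            return False
    return True
```

For instance, with $P$ the box $0 \le x_1 \le 1$, $0 \le x_2 \le 2$ and $y = (x_1)$, $z = (x_2)$, the projection of $P$ onto $y$ is $[0,1] \subseteq [0,2]$, so the check succeeds in that direction and fails in the reverse one. This still needs the elimination for one of the two sides, so it does not escape the worst-case blow-up of attempt (1).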
Perhaps someone here can put me on the right track?