Columbia University says it “embraces generative AI tools” in its official AI policy, as the University quietly rolls out enterprise‑grade AI platforms, from CHAT to Claude for Education. Meanwhile, students and faculty rarely see the contracts or procurement decisions behind these integrations. AI at Columbia is moving into vital online infrastructure, degrees, and administrative workflows, yet there is still no public map of how these tools were chosen or under what terms they operate.

The Columbia University Generative AI policy opens with the statement that “Columbia University is dedicated to advancing knowledge and learning, and embraces generative AI tools.” The policy is written from the assumption that AI adoption is an unambiguous good, with little acknowledgment that learning and effective engagement with AI tools might be in tension, or even mutually exclusive. Beyond this, the policy simply asks students to self-disclose their use of AI tools, a requirement that is practically unenforceable and out of touch with how students actually use them.

At the same time, Columbia is rapidly expanding artificial intelligence integration across campus. Last month, the University approved a new master’s degree in artificial intelligence in the School of Engineering and Applied Science (SEAS) after SEAS Dean Shih-Fu Chang said that Columbia was falling behind peer institutions in the field. Also in the works is CHAT, an internal AI chat tool built for Columbia computers and software. CHAT, currently being piloted by Columbia University Information Technology (CUIT), draws on ChatGPT and other models to help users draft emails, analyze data, and perform other academic and professional tasks.

Barnard College, though not party to Columbia’s OpenAI agreement, has offered AI tools such as Google Gemini since the fall of 2025 as an extension of its pre-existing Google and Adobe contracts. The move was in part an effort to “create an equitable playing field” between students who have access to personal AI subscriptions and those who do not. Barnard’s Academic Technologies & Learning Innovation Services (ATLIS) recommends that users approach AI mindfully, treating it as a tool rather than a shortcut. “Millie,” an AI pilot program launched last December to serve as an easy-access faculty concierge, carries a similar disclaimer: whenever an inquiry is submitted, a short statement advises the user to verify details with official sources. This underscores that an AI assistant cannot itself be held accountable; the disclaimer shifts responsibility onto the user rather than relieving it.

A May 5 email to faculty and staff introduced “the availability of Claude for Education as the latest addition to our suite of AI capabilities at Columbia.” The rollout broadens access for Columbia affiliates and marks another murky partnership between the University and a major AI company. Citing “enhanced security and privacy,” CUIT recommends the “CUIT Claude enterprise license … over individual licensed use of Claude.” The Office of the Provost separately announced that the 2026 Teaching and Learning Awards would include an incentive for faculty members who incorporate AI technology in their course design, and the Columbia University Board of Trustees cited Jennifer Mnookin’s pro-AI stance in her selection as president.

Columbia’s broader AI story extends beyond pilot projects and experimental tools. It is about embedding AI into degrees, teaching practices, and core administrative infrastructure. What is missing is a clear, centralized account of how the University chooses which AI tools and partnerships to adopt, on what terms, and with what safeguards. While Columbia prides itself, in the words of student-facing administrators, on “embracing” generative AI, the actual governance of these relationships remains fragmented across schools, offices, and technical teams. Students and faculty alike hope for clearer guidance from incoming President Mnookin on this important yet divisive matter, especially given its implications for the daily operations of a university. Until the University offers a public, consolidated map of its AI contracts, access tiers, and data-use commitments across the institution, any claim of responsible AI leadership will ring hollow.

Staff Writer and Science Team member Yuna Chung contributed reporting.

Image via Bwog Archives