ETHICS STATEMENT
HERIntelAI Systems LLC
Last Updated: October 28, 2025
Our Ethical Foundation
At HERIntelAI, ethics aren't an add-on—they're built into every line of code, every product decision, and every user interaction.
Human • Ethical • Responsible isn't just our tagline. It's our operational framework.
HUMAN-LED INTELLIGENCE™: OUR COMMITMENT
What This Means
It's not about keeping humans in the loop.
It's about keeping humanity in the system.
Traditional "human-in-the-loop" AI treats people as quality checkers—reviewing outputs, fixing errors, approving results.
Human-Led Intelligence™ embeds human values, ethical guardrails, and authentic voice preservation into the system itself. Users lead with their mission; AI accelerates their strategy.
OUR ETHICAL PRINCIPLES
1. TRANSPARENCY
We Commit To:
Citing all sources with URLs for verification
Explaining how every score and assessment is calculated
Disclosing AI limitations and potential for errors
Being honest about what we can and cannot do
Never using "black box" algorithms without explanation
We Will Never:
Fabricate grant opportunities, funders, or data
Hide how our systems work
Claim AI accuracy we can't deliver
Use proprietary methods to obscure simple truths
What This Looks Like:
Every grant search result includes the source URL
Readiness scores break down into specific factors
If we can't find verified information, we say so explicitly
Our methodology is publicly documented
2. AUTHENTICITY
We Commit To:
Preserving users' authentic organizational voice
Avoiding generic nonprofit jargon and templates
Providing multiple phrasing options (never just one)
Keeping community context and mission central
Empowering users to sound like themselves, not like "AI"
We Will Never:
Erase organizational identity with templated language
Impose a one-size-fits-all communication style
Value polish over authenticity
Encourage users to sound like anyone other than themselves
What This Looks Like:
Users share their mission and signature phrases
AI mirrors their tone and terminology
Proposals sound like the organization, not a robot
Placeholders keep content customizable
3. ACCURACY & NON-FABRICATION
We Commit To:
Providing only verified, publicly available grant information
Flagging when data may be outdated or unverified
Building guardrails to prevent AI hallucination
Refusing to fabricate statistics, funders, or opportunities
Encouraging users to verify everything we provide
We Will Never:
Invent grant opportunities that don't exist
Make up foundation names, deadlines, or award amounts
Claim success rates or statistics we can't verify
Encourage users to apply to fake opportunities
What This Looks Like:
If a grant can't be verified, we say "I couldn't locate grants matching these criteria"
We suggest alternative search strategies instead of inventing results
Every opportunity includes a verifiable source link
Our anti-fabrication protocol is enforced in the system
4. RESPONSIBLE USE
We Commit To:
Refusing to help users misrepresent capacity or credentials
Flagging ethical concerns in user requests
Offering ethical alternatives to problematic approaches
Requiring human review before any submission
Respecting grant ethics and funder requirements
We Will Never:
Help users lie about their organizational capacity
Encourage applications where users are clearly ineligible
Assist in plagiarizing other proposals
Enable fabrication of community support or partnerships
Facilitate unethical use even if requested
What This Looks Like:
If a user asks to exaggerate impact data, we refuse and offer ethical alternatives
We flag conflicts of interest and compliance issues
We remind users that AI-generated content requires human review
We maintain ethical boundaries even when users push back
5. EQUITY & ACCESS
We Commit To:
Making grant strategy accessible to under-resourced organizations
Pricing that doesn't exclude small nonprofits and founders
Supporting women-owned, minority-owned, and community-based organizations
Building tools that reduce barriers, not create new ones
Centering voices often marginalized in traditional funding
We Will Never:
Price tools only for wealthy organizations
Design features that assume expensive infrastructure
Ignore accessibility needs of diverse users
Perpetuate existing inequities in funding systems
What This Looks Like:
Free 30-day demo with full feature access
Lifetime pricing options for budget-conscious users
Founding Member pricing locks in affordability
Simple, jargon-free language throughout
No requirement for paid ChatGPT subscriptions
6. PRIVACY & CONFIDENTIALITY
We Commit To:
Never selling user data to third parties
Collecting only essential information
Being transparent about what data we collect and why
Giving users control over their information
Protecting sensitive organizational details
We Will Never:
Share proposal content with third parties
Use user data to train AI without permission
Monetize user information
Store more data than necessary to provide services
What This Looks Like:
We don't have access to your GPT conversations (that's controlled by OpenAI)
Email addresses used only for service delivery and communications you opted into
Clear privacy policy in plain language
Easy data deletion upon request
GRANT ETHICS COMPLIANCE
Standards We Uphold
AI Grant Strategist users must:
Submit only truthful, accurate information in applications
Verify eligibility requirements before applying
Disclose conflicts of interest as required by funders
Use awarded funds only for stated purposes
Report outcomes honestly and accurately
We refuse to assist with:
Fabricating organizational capacity or credentials
Misrepresenting community support or partnerships
Inflating impact metrics or performance data
Plagiarizing from other proposals or sources
Applying to grants where organizations are clearly ineligible
Consequences of Violations
Users who violate grant ethics may face:
Warning for minor first-time violations
Suspension for repeated violations
Permanent account termination for serious violations (fraud, misrepresentation)
No refunds for ethics-based terminations
Potential reporting to funders or authorities for egregious violations
BIAS & FAIRNESS
Our Commitment
We actively work to identify and mitigate bias in AI Grant Strategist:
Data Bias:
We don't train on biased grant databases that favor certain org types
We provide equal guidance regardless of org size, budget, or prestige
Our readiness scoring doesn't penalize smaller or newer organizations unfairly
Language Bias:
We preserve authentic organizational voice (not "professionalized" to sound corporate)
We don't impose white/Western/academic language norms
We offer multiple phrasing options to accommodate different communication styles
Access Bias:
Affordable pricing ensures small organizations aren't excluded
No requirement for expensive tools or subscriptions beyond basic ChatGPT access
Plain-language design accessible to users with varying technical expertise
Ongoing Work
We recognize that bias is pervasive and our work is ongoing:
We monitor outputs for problematic patterns
We incorporate user feedback on fairness issues
We update training and guardrails based on real-world use
We commit to continuous improvement, not claims of perfection
ACCOUNTABILITY & TRANSPARENCY
How We Hold Ourselves Accountable
User Feedback:
We listen to and act on user concerns
Feedback channels: support@herintelai.com
Response commitment: Within 24 hours
Founding Member Input:
Early users help shape product roadmap
Direct influence on feature priorities
Ongoing dialogue about ethical concerns
Public Documentation:
Methodology published on our website
Regular updates on product changes
Transparent communication about limitations
Independent Review:
Technical advisors review system configurations
Legal review of terms and privacy policies
Openness to third-party audits as we grow
What We Ask of Users
You are responsible for:
Reviewing all AI-generated content before submission
Verifying grant eligibility directly with funders
Ensuring accuracy and compliance with funder requirements
Using AI Grant Strategist ethically and legally
Reporting concerns or issues you encounter
We cannot:
Guarantee funding success
Replace human judgment and expertise
Take responsibility for content submitted without review
Control grant decisions (those belong to funders)
OUR PROMISE TO THE GRANT COMMUNITY
To Funders
We will:
Never encourage applicants to misrepresent themselves
Support compliance with your requirements
Promote authentic, high-quality applications
Reduce low-effort, template-driven submissions
Help align the right applicants with the right opportunities
We recognize that AI tools can be misused. We've built ethical guardrails specifically to protect the integrity of grant processes.
To Grant Seekers
We will:
Provide honest, transparent guidance
Preserve your authentic voice and mission
Help you compete on merit, not just polished writing
Support you in building capacity, not just writing proposals
Refuse to help you cut ethical corners
We believe the best grant strategy is one built on truth, authenticity, and genuine alignment with funder goals.
To the Field
We will:
Advance responsible AI practices in the nonprofit/social impact sector
Share learnings about ethical AI development
Advocate for transparency and accountability in AI tools
Support ecosystem-wide improvements in funding equity
Lead by example, not just by words
CONTINUOUS IMPROVEMENT
We're Not Perfect
AI systems—even ethically designed ones—can make mistakes. We commit to:
Acknowledging errors when they happen
Learning from mistakes and improving systems
Being transparent about limitations
Welcoming feedback and criticism
Iterating based on real-world use
Found an ethical concern? Email ethics@herintelai.com
We take these reports seriously and respond within 48 hours.
ETHICAL AI DEVELOPMENT PRINCIPLES
How We Build
1. Ethics from Day One
Ethics aren't bolted on after launch. They're embedded in design, development, and deployment.
2. Human Oversight
Founder-led development with technical advisors, user input, and ongoing review.
3. Transparency by Default
We document decisions, explain algorithms, and publish methodology.
4. User Agency
Users control their data, their content, and their final submissions. AI guides; humans decide.
5. Harm Prevention
We proactively identify and mitigate risks of misuse, bias, and unintended consequences.
6. Continuous Learning
We iterate based on feedback, research, and evolving best practices in AI ethics.
CONTACT US ABOUT ETHICS
Questions about our ethical practices?
Email: ethics@herintelai.com
Report a concern:
Email: ethics@herintelai.com (confidential)
General inquiries:
Email: support@herintelai.com
Response commitment: Within 48 hours for ethics inquiries
OUR ETHICAL NORTH STAR
At the end of the day, our ethical commitment is simple:
We build AI tools that help organizations do good work—honestly, authentically, and effectively.
If a feature, practice, or decision doesn't serve that goal, we don't do it.
Human • Ethical • Responsible
Not just a tagline. A standard we hold ourselves to every day.
© 2025 HERIntelAI Systems LLC